Search results for: real-time optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3261

741 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations

Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar

Abstract:

Fast simulations are critical in reducing time to market for CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumption of Networks-on-Chip, and researchers and designers rely on these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days. To speed up the simulations, it is necessary to identify and optimize the hotspots in the simulator source code. Among the several simulators available, we chose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses has been identified. To further reduce cache misses, we use software optimization techniques such as removal of unused functions, loop interchange, and replacing the post-increment operator with the pre-increment operator for non-primitive data types; these techniques reduced cache misses by 18.52%, 5.34% and 3.91%, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD instructions are used to parallelize and vectorize the most time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization, respectively.
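
A minimal sketch of the loop-interchange idea (illustrative only; Booksim2.0 is C++ and the matrix here is hypothetical): with the row index in the outer loop, a row-major 2-D array is traversed in memory order, which is why the interchange reduces cache misses.

```python
# Illustrative sketch, not Booksim2.0 code: loop interchange for cache locality.
N = 512
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def sum_column_major(m):
    # Poor locality: the inner loop strides across rows.
    total = 0
    for j in range(len(m[0])):
        for i in range(len(m)):
            total += m[i][j]
    return total

def sum_row_major(m):
    # After interchange: the inner loop walks contiguous elements of one row.
    total = 0
    for i in range(len(m)):
        for j in range(len(m[0])):
            total += m[i][j]
    return total
```

Both traversals compute the same result; only the memory access pattern differs.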

Keywords: cache behaviour, network-on-chip, performance profiling, vectorization

Procedia PDF Downloads 197
740 Optimization of the Drinking Water Treatment Process: Improvement of the Treated Water Quality by Using the Sludge Produced by the Water Treatment Plant

Authors: M. Derraz, M. Farhaoui

Abstract:

Problem statement: In water treatment, the coagulation and flocculation processes produce sludge in proportion to the raw-water turbidity. Aluminum sulfate is the most common coagulant used in the water treatment plants of Morocco, as in many other countries, and the sludge produced by the treatment plant is difficult to manage. However, this sludge can be reused in the process to improve the quality of the treated water and reduce the aluminum sulfate dose. Approach: In this study, the effectiveness of sludge was evaluated at different turbidity levels (low, medium, and high) and coagulant dosages to find the optimal operational conditions. The influence of settling time was also studied. A set of jar test experiments was conducted to find the sludge and aluminum sulfate dosages that improve the produced water quality at each turbidity level. Results: The results demonstrated that using the sludge produced by the treatment plant can improve the quality of the produced water and reduce aluminum sulfate consumption. The aluminum sulfate dosage can be reduced by 40 to 50% depending on the turbidity level (10, 20, and 40 NTU). Conclusions/Recommendations: The results show that sludge can be used to reduce the aluminum sulfate dosage and improve the quality of treated water. The highest turbidity removal efficiency was observed with 6 mg/l of aluminum sulfate and 35 mg/l of sludge at low turbidity, 20 mg/l of aluminum sulfate and 50 mg/l of sludge at medium turbidity, and 20 mg/l of aluminum sulfate and 60 mg/l of sludge at high turbidity. The turbidity removal efficiencies are 97.56%, 98.96%, and 99.47% for the low, medium and high turbidity levels, respectively.
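
The removal-efficiency figures quoted above follow from the standard jar-test calculation; a minimal sketch (the final turbidity readings below are back-calculated for illustration, not taken from the paper):

```python
def turbidity_removal_efficiency(initial_ntu, final_ntu):
    """Percent turbidity removed, as reported in jar-test studies."""
    return (initial_ntu - final_ntu) / initial_ntu * 100.0

# Hypothetical final readings chosen to reproduce the reported efficiencies
# for 10 and 40 NTU raw water (97.56% and 99.47%).
low_eff = turbidity_removal_efficiency(10.0, 0.244)
high_eff = turbidity_removal_efficiency(40.0, 0.212)
```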

Keywords: coagulation process, coagulant dose, sludge reuse, turbidity removal

Procedia PDF Downloads 237
739 Effect of Saponin Enriched Soapwort Powder on Structural and Sensorial Properties of Turkish Delight

Authors: Ihsan Burak Cam, Ayhan Topuz

Abstract:

Turkish delight is produced by bleaching the plain delight mix (refined sugar, water and starch) with soapwort extract and powdered sugar. Soapwort extract, which contains a high amount of saponin, is an additive used in Turkish delight and tahini halvah production to improve consistency, chewiness and color; its bioactive saponin content lets it act as an emulsifier. In this study, soapwort powder was produced after determining the optimum process conditions for soapwort extract by the response-surface method. The extract was enriched in saponin by reverse osmosis (63% saponin on a dry basis), and a Büchi mini spray dryer B-290 was used to produce spray-dried soapwort powder (aw = 0.254) from the enriched concentrate. The optimized processing steps and saponin enrichment of the soapwort extract were tested in Turkish delight production. Delight samples produced with the soapwort powder were compared with samples produced with commercial extract (control) in terms of chewiness, springiness, stickiness, adhesiveness, hardness, color and sensorial characteristics. According to the results, all textural properties except hardness of the delights produced with the powder were statistically different from the control samples. Chewiness, springiness, stickiness, adhesiveness and hardness values of the samples (powder-based delights / control delights) were 361.9/1406.7, 0.095/0.251, -120.3/-51.7, 781.9/1869.3 and 3427.3 g/3118.4 g, respectively. Quality analysis of the end products showed no statistically significant negative effect of either the soapwort extract or the soapwort powder on the color and appearance of the Turkish delight.

Keywords: saponin, delight, soapwort powder, spray drying

Procedia PDF Downloads 253
738 Optimization of Personnel Selection Problems via Unconstrained Geometric Programming

Authors: Vildan Kistik, Tuncay Can

Abstract:

From a business perspective, cost and profit are two key factors. The intent of most businesses is to minimize cost so as to maximize profit and thereby provide the greatest benefit to themselves. However, the physical system is very complicated because of technological constraints, the rapid growth of competitive environments and similar factors, and in such a system it is not easy to maximize profits or minimize costs. Businesses must assess the competence and suitability of the personnel to be recruited, taking many criteria into consideration. Factors such as level of education, experience, psychological and sociological position, and human relationships in the field are just some of the important factors in selecting staff for a firm. Personnel selection is a very important and costly process for businesses in today's competitive market. Although many mathematical methods have been developed for personnel selection, their use is unfortunately rarely encountered in real life. In this study, unlike other methods, an exponential programming model was established based on the probability that selected personnel fail after starting work. With the necessary transformations, the problem was converted into an unconstrained geometric programming problem, and the personnel selection problem is approached with the geometric programming technique. Personnel selection scenarios for a classroom were established with the help of the normal distribution, and optimum solutions were obtained. In the most appropriate solutions, the personnel selection process for the classroom was achieved at minimum cost.
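
The core transformation can be illustrated on a toy problem (this two-term posynomial is hypothetical, not the authors' personnel-selection model): substituting x = e^y makes an unconstrained posynomial convex in y, so any 1-D convex minimizer finds the global optimum.

```python
import math

# Toy unconstrained geometric program: minimize f(x) = 2x + 3/x over x > 0.
# With x = e^y the objective becomes convex in y.
def f(y):
    return 2 * math.exp(y) + 3 * math.exp(-y)

def minimize_convex(g, lo=-10.0, hi=10.0, iters=200):
    # Golden-section search on a convex 1-D function.
    phi = (math.sqrt(5) - 1) / 2
    for _ in range(iters):
        a = hi - phi * (hi - lo)
        b = lo + phi * (hi - lo)
        if g(a) < g(b):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

y_star = minimize_convex(f)
x_star = math.exp(y_star)   # optimum of the original problem, sqrt(3/2)
```

The analytic optimum is x* = sqrt(3/2) with objective value 2*sqrt(6), which the search recovers.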

Keywords: geometric programming, personnel selection, non-linear programming, operations research

Procedia PDF Downloads 271
737 A Prediction Model for Dynamic Responses of Buildings to Earthquakes Based on Evolutionary Learning

Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park

Abstract:

Seismic-response-based structural health monitoring has been performed to prevent seismic damage. Structural seismic damage to a building is caused by instantaneous stress concentrations, which are related to the dynamic characteristics of the earthquake. Meanwhile, seismic response analysis to estimate the dynamic responses of a building demands significant computational cost. To prevent the failure of structural members given the characteristics of the earthquake, while avoiding this high computational cost, this paper presents an artificial neural network (ANN) based prediction model for the dynamic responses of a building over a specified time length. From the measured dynamic responses, the input and output nodes of the ANN are formed from windows of that length and adopted for training. In the model, an evolutionary radial basis function neural network (ERBFNN) is implemented, in which a radial basis function network (RBFN) is integrated with an evolutionary optimization algorithm that finds the RBF variables. The effectiveness of the proposed model is verified through an analytical study in which responses from the dynamic analysis of a multi-degree-of-freedom system are used as training data for the ERBFNN.
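
The RBFN at the heart of the model computes a weighted sum of Gaussian basis functions; a minimal forward-pass sketch (the centers, widths and weights here are placeholders for the values the evolutionary algorithm would find):

```python
import math

# Sketch of the RBFN component only; the paper's evolutionary search
# over centers/widths/weights is not reproduced here.
def rbf_predict(x, centers, widths, weights, bias=0.0):
    """y(x) = bias + sum_k w_k * exp(-||x - c_k||^2 / (2 s_k^2))."""
    y = bias
    for c, s, w in zip(centers, widths, weights):
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-dist2 / (2 * s * s))
    return y
```

At a basis-function center the Gaussian contributes its full weight; far from all centers the prediction decays to the bias.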

Keywords: structural health monitoring, dynamic response, artificial neural network, radial basis function network, genetic algorithm

Procedia PDF Downloads 304
736 Culturable Diversity of Halophilic Bacteria in Chott Tinsilt, Algeria

Authors: Nesrine Lenchi, Salima Kebbouche-Gana, Laddada Belaid, Mohamed Lamine Khelfaoui, Mohamed Lamine Gana

Abstract:

Saline lakes are extreme hypersaline environments that are five to ten times saltier than seawater (150-300 g L-1 salt concentration). Hypersaline regions differ from each other in salt concentration, chemical composition and geographical location, which determine the nature of the inhabitant microorganisms. In order to explore the diversity of moderately and extremely halophilic bacteria in Chott Tinsilt (eastern Algeria), an isolation program was performed. First, water samples were collected from the saltern during the pre-salt-harvesting phase. Salinity, pH and temperature at the sampling site were determined in situ. Chemical analysis of the water sample indicated that Na+ and Cl- were the most abundant ions. Isolates were obtained by plating out the samples on complex and synthetic media. In this study, seven halophilic bacterial cultures were isolated. The isolates were examined for Gram reaction, cell morphology and pigmentation. Enzymatic assays (oxidase, catalase, nitrate reductase and urease) and optimization of growth conditions were performed. The results indicated that the salinity optima varied from 50 to 250 g L-1, whereas the optimum temperatures ranged from 25°C to 35°C. Molecular identification of the isolates was performed by sequencing the 16S rRNA gene. The results showed that these cultured isolates included members of the genera Halomonas, Staphylococcus, Salinivibrio, Idiomarina, Halobacillus, Thalassobacillus and Planococcus, and what may represent a new bacterial genus.

Keywords: bacteria, Chott, halophilic, 16S rRNA

Procedia PDF Downloads 281
735 The Analysis of Emergency Shutdown Valves Torque Data in Terms of Its Use as a Health Indicator for System Prognostics

Authors: Ewa M. Laskowska, Jorn Vatn

Abstract:

Industry 4.0 focuses on the digital optimization of industrial processes. The idea is to use extracted data to build a decision support model that enables real-time decision making. In terms of predictive maintenance, the desired decision support tool is a model enabling prognostics of a system's health based on the current condition of the considered equipment. Within the area of system prognostics and health management, a commonly used measure is the Remaining Useful Lifetime (RUL) of a system. Because the RUL is a random variable, it has to be estimated from available health indicators. Health indicators can be of different types and come from different sources: process variables, equipment performance variables, data on the number of experienced failures, etc. The aim of this study is the analysis of performance variables of emergency shutdown valves (ESVs) used in the oil and gas industry. An ESV is inspected periodically, and at each inspection the torque and operating time of the valve are registered. The data will be analyzed by means of machine learning or statistical analysis. The purpose is to investigate whether the available data can be used as a health indicator for prognostic purposes. The second objective is to examine the most efficient way to incorporate the data into a predictive model: whether the data can be applied as explanatory variables in a Markov process, or whether other stochastic processes would be more convenient for building an RUL model based on the registered data.
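
One candidate formulation of the RUL question raised above (the paper deliberately leaves the choice of stochastic model open): if valve life is modelled as Weibull, the health state enters through the survival age t, and an RUL figure can be read off the conditional reliability. A sketch under that assumption, with illustrative parameters:

```python
import math

# Assumed Weibull life model; beta (shape) and eta (scale) are hypothetical.
def conditional_reliability(t, dt, beta, eta):
    """P(survive to t+dt | survived to t) under Weibull(beta, eta)."""
    return math.exp((t / eta) ** beta - ((t + dt) / eta) ** beta)

def rul_for_target(t, beta, eta, target=0.9, step=1.0):
    """Largest horizon dt (in whole steps) keeping conditional reliability >= target."""
    dt = 0.0
    while conditional_reliability(t, dt + step, beta, eta) >= target:
        dt += step
    return dt
```

With beta = 1 the model is memoryless (exponential), so the conditional reliability does not depend on the current age t; beta > 1 encodes wear-out, shrinking the RUL as the valve ages.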

Keywords: emergency shutdown valves, health indicator, prognostics, remaining useful lifetime, RUL

Procedia PDF Downloads 91
736 Optimization of Reliability Test Plans: Increasing Wafer Fabrication Equipment Uptime

Authors: Swajeeth Panchangam, Arun Rajendran, Swarnim Gupta, Ahmed Zeouita

Abstract:

Semiconductor processing chambers tend to operate in controlled but aggressive conditions (chemistry, plasma, high temperature, etc.). Owing to this, the design of such equipment requires developing robust and reliable hardware and software. Any equipment downtime due to reliability issues has cost implications both for customers, in terms of tool downtime (reduced throughput), and for equipment manufacturers, in terms of high warranty costs and a customer trust deficit. A thorough reliability assessment of critical parts and a plan for preventive maintenance/replacement schedules need to be completed before tool shipment. This saves significant warranty costs and tool downtime in the field. However, designing a reliability test plan that accurately demonstrates reliability targets, with proper sample size and test duration, is quite challenging. This is mainly because components can fail in different failure modes that follow Weibull distributions with different shape (beta) values. Without an a priori Weibull beta for the failure mode under consideration, such plans over- or under-utilize resources, which eventually ends in false-positive or false-negative estimates. This paper proposes a methodology to design a reliability test plan with optimal sample size, test duration, or both, independent of any a priori Weibull beta. The methodology can be used in demonstration tests and can be extended to accelerated life tests to further decrease sample size and test duration.
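
The sensitivity to the assumed beta can be seen in the classic zero-failure (success-run) demonstration formula, which the proposed methodology aims to free from that assumption (this sketch is the textbook formula, not the paper's procedure):

```python
import math

def zero_failure_sample_size(reliability, confidence, beta, time_ratio=1.0):
    """Units to test with zero failures allowed to demonstrate `reliability`
    at `confidence`, testing each unit for time_ratio x mission time,
    assuming a Weibull life distribution with shape `beta`."""
    n = math.log(1.0 - confidence) / (time_ratio ** beta * math.log(reliability))
    return math.ceil(n)
```

Demonstrating 90% reliability at 90% confidence with mission-length tests needs the classic 22 units; extending each test to twice the mission time cuts the sample to 6 units if beta = 2, but that saving evaporates if the true beta is smaller, which is exactly the risk the abstract describes.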

Keywords: reliability, stochastics, preventive maintenance

Procedia PDF Downloads 15
733 A Proposal of Advanced Key Performance Indicators for Assessing Six Performances of Construction Projects

Authors: Wi Sung Yoo, Seung Woo Lee, Youn Kyoung Hur, Sung Hwan Kim

Abstract:

Large-scale construction projects are continuously increasing in number, and the need for tools to monitor and evaluate project success is emphasized. At the construction industry level, there are limitations in deriving performance evaluation factors that reflect the diversity of construction sites, and in systems that can objectively evaluate and manage performance. Additionally, there are difficulties in integrating the structured and unstructured data generated at construction sites and deriving improvements from them. In this study, we propose Key Performance Indicators (KPIs) that enable performance evaluation reflecting the increased diversity of construction sites and the unstructured data they generate, and present a model for measuring performance with the derived indicators. The comprehensive performance of a unit construction site is assessed in 6 areas (time, cost, quality, safety, environment, productivity) through 26 indicators. We collect performance indicator information from 30 construction sites that meet legal standards and have been performed successfully, and we apply data augmentation and optimization techniques to establish measurement standards for each indicator. In other words, the KPIs for construction site performance evaluation presented in this study provide standards for evaluating performance in six areas using institutional requirement data and document data. This can be expanded into a performance evaluation system that considers the scale and type of construction project. The indicators are also expected to serve as a comprehensive index for the construction industry and as basic data for tracking competitiveness at the national level and establishing policies.

Keywords: key performance indicator, performance measurement, structured and unstructured data, data augmentation

Procedia PDF Downloads 42
732 Conditions of the Anaerobic Digestion of Biomass

Authors: N. Boontian

Abstract:

Biological conversion of biomass to methane has received increasing attention in recent years. Grasses, in particular, have been explored for their potential for anaerobic digestion to methane. In this review, extensive literature data have been tabulated and classified, and the influences of several parameters on the methane potential of these feedstocks are presented. Lignocellulosic biomass represents a mostly unused source for biogas and ethanol production. Many factors, including lignin content, crystallinity of cellulose, and particle size, limit the digestibility of the hemicellulose and cellulose present in lignocellulosic biomass. Pretreatments have been used to improve this digestibility, and each pretreatment has its own effects on cellulose, hemicellulose and lignin, the three main components of lignocellulosic biomass. Solid-state anaerobic digestion (SS-AD) generally occurs at solid concentrations higher than 15%, whereas liquid anaerobic digestion (AD) handles feedstocks with solid concentrations between 0.5% and 15%. Animal manure, sewage sludge, and food waste are generally treated by liquid AD, while the organic fraction of municipal solid waste (OFMSW) and lignocellulosic biomass such as crop residues and energy crops can be processed through SS-AD. An increase in operating temperature can improve both the biogas yield and the production efficiency; other practices, such as using AD digestate or leachate as an inoculant or decreasing the solid content, may increase biogas yield but have a negative impact on production efficiency. Focus is placed on substrate pretreatment in AD as a means of increasing biogas yields using today's diversified substrate sources.

Keywords: anaerobic digestion, lignocellulosic biomass, methane production, optimization, pretreatment

Procedia PDF Downloads 379
731 Commercial Automobile Insurance: A Practical Approach of the Generalized Additive Model

Authors: Nicolas Plamondon, Stuart Atkinson, Shuzi Zhou

Abstract:

The insurance industry is usually not the first topic one has in mind when thinking about applications of data science. However, the use of data science in the finance and insurance industries is growing quickly for several reasons, including an abundance of reliable customer data and ferocious competition requiring more accurate pricing. Among the top use cases of data science are pricing optimization, customer segmentation, customer risk assessment, fraud detection, marketing, and triage analytics. The objective of this paper is to present an application of the generalized additive model (GAM) to a commercial automobile insurance product: individually rated commercial automobiles. These are vehicles used for commercial purposes for which there is not enough volume to price several vehicles at the same time. The GAM was selected as an improvement over the GLM for its ease of use and its wide range of applications. The model was trained on the largest split of the data to determine the model parameters; the remaining data was used as testing data to verify the quality of the modeling. We used the Gini coefficient to evaluate the performance of the model; for long-term monitoring, commonly used metrics such as RMSE and MAE will be used. Another topic of interest in the insurance industry is the process of producing the model. We discuss at a high level the interactions between the different teams within an insurance company that need to work together to produce a model and then monitor its performance over time, as well as the regulations in place in the insurance industry. Finally, we discuss the maintenance of the model, the fact that new data does not arrive constantly, and that some metrics can take a long time to become meaningful.
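
The Gini coefficient used for model evaluation measures how well the predictions rank the actual losses; a standard normalized-Gini sketch (a common actuarial formulation, not necessarily the authors' exact implementation):

```python
def gini_coefficient(actual, predicted):
    """Normalized Gini: how well `predicted` orders `actual` losses,
    relative to the perfect ordering (1.0 = perfect, 0.0 = random)."""
    def gini(a, p):
        # Sort actual values by descending prediction, accumulate the
        # share of total losses captured, and compare to a random ordering.
        order = sorted(range(len(a)), key=lambda i: p[i], reverse=True)
        total = sum(a)
        cum, g = 0.0, 0.0
        for i in order:
            cum += a[i]
            g += cum / total
        return (g - (len(a) + 1) / 2.0) / len(a)
    return gini(actual, predicted) / gini(actual, actual)
```

A model that ranks risks perfectly scores 1.0; one that ranks them exactly backwards scores -1.0.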

Keywords: insurance, data science, modeling, monitoring, regulation, processes

Procedia PDF Downloads 76
730 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques

Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet

Abstract:

5G and 6G technology offer enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in the 5G/6G architecture. In this paper, we propose the integration of free-space optical communication (FSO) with fiber sensor networks for IoT applications. FSO is gaining popularity as an effective alternative to the limited availability of radio frequency (RF) spectrum, owing to its flexibility, high achievable optical bandwidth, and low power consumption in several communications applications, such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO is an optimal choice for wireless networks to realize the full potential of 5G/6G technology, offering speeds of 100 Gbit/s or more in IoT applications. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important for achieving real-time, accurate, and smart monitoring in IoT applications. We therefore propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple fiber Bragg grating (FBG) sensors using only the FBG spectrum obtained from a real experiment.

Keywords: optical sensor, artificial intelligence, Internet of Things, free-space optics

Procedia PDF Downloads 63
729 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Cococynthis citrullus) and Coconut Fruit (Cocos nucifera)

Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude

Abstract:

Comparative analysis of the properties of melon seed and coconut fruit and their oil yields was carried out using standard AOAC analytical techniques. The results revealed moisture contents of 11.15% (melon) and 7.59% (coconut) and crude lipid contents of 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature and solute:solvent ratio) showed significant differences (p < 0.05) in yield between the samples, with melon seed flour giving a higher percentage range of oil yield (41.30-52.90%) than coconut (36.25-49.83%). Physical characterization of the extracted oils was also carried out: the refractive indices are 1.487 (melon seed oil) and 1.361 (coconut oil), and the viscosities are 0.008 (melon seed oil) and 0.002 (coconut oil). Chemical analysis of the extracted oils shows acid values of 1.00 mg NaOH/g oil (melon) and 10.050 mg NaOH/g oil (coconut), and saponification values of 187.00 mg KOH/g (melon) and 183.26 mg KOH/g (coconut). The iodine value of the melon oil is 75.00 mg I2/g and that of the coconut oil 81.00 mg I2/g. The standard statistical package Minitab version 16.0 was used for the regression analysis and analysis of variance (ANOVA), and also to optimize the leaching process. Both samples gave their highest oil yields at the same optimal conditions: a solute-solvent ratio of 40 g/ml, a leaching time of 2 hours and a leaching temperature of 50°C, giving yields of ≥ 52% (melon seed) and ≥ 48% (coconut). Both samples have oil-yielding potential, with melon seed giving the higher yield.
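
The regression step behind such yield models can be sketched for a single factor (the least-squares mechanics Minitab performs; any data passed in below would be hypothetical, not the paper's dataset):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y ~ a + b*x for one process variable,
    e.g. oil yield against leaching time."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b
```

Minitab's full analysis fits all three factors plus interactions, but each coefficient is estimated by this same least-squares criterion.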

Keywords: coconut, melon, optimization, processing

Procedia PDF Downloads 442
728 Optimization of Gastro-Retentive Matrix Formulation and Its Gamma Scintigraphic Evaluation

Authors: Swapnila V. Shinde, Hemant P. Joshi, Sumit R. Dhas, Dhananjaysingh B. Rajput

Abstract:

The objective of the present study is to develop a hydrodynamically balanced system for atenolol, a β-blocker, as a single-unit floating tablet. Atenolol shows pH-dependent solubility, resulting in a bioavailability of 36%. Thus, a site-specific oral controlled-release floating drug delivery system was developed. The formulation includes the novel use of a rate-controlling polymer, locust bean gum (LBG), in combination with HPMC K4M and the gas-generating agent sodium bicarbonate. Tablets were prepared by the direct compression method and evaluated for physico-mechanical properties. A statistical method was utilized to optimize the effect of the independent variables, namely the amounts of HPMC K4M and LBG, on three dependent responses: cumulative drug release, floating lag time and floating time. Graphical and mathematical analysis of the results allowed the identification and quantification of the formulation variables influencing the selected responses. To study the gastrointestinal transit of the optimized gastro-retentive formulation, in vivo gamma scintigraphy was carried out in six healthy rabbits after radiolabeling the formulation with 99mTc. The transit profiles demonstrated that the dosage form was retained in the stomach for more than 5 hrs. The study signifies the potential of the developed system for stomach-targeted delivery of atenolol with improved bioavailability.

Keywords: floating tablet, factorial design, gamma scintigraphy, antihypertensive model drug, HPMC, locust bean gum

Procedia PDF Downloads 275
727 Interaction Evaluation of Silver Ion and Silver Nanoparticles with Dithizone Complexes Using DFT Calculations and NMR Analysis

Authors: W. Nootcharin, S. Sujittra, K. Mayuso, K. Kornphimol, M. Rawiwan

Abstract:

Silver has distinct antibacterial properties and has been used as a component of commercial products with many applications. The increasing number of such products raises the risks of silver exposure for humans and the environment, such as the symptoms of argyria and the release of silver into the environment; the detection of silver in the aquatic environment is therefore important. Colorimetric chemosensors are designed on the basis of ligand interactions with a metal ion that lead to signal changes visible to the naked eye, which makes them a very useful method for this application. The dithizone ligand is considered one of the most effective chelating reagents for metal ions, owing to the high selectivity and sensitivity of its photochromic reaction with silver, and its linear backbone affords rotation into various isomeric forms. The present study is focused on the conformation and interaction of silver ions and silver nanoparticles (AgNPs) with dithizone using density functional theory (DFT). The interaction parameters were determined in terms of the binding energies of the complexes, with geometry optimization, frequency calculations and binding energies computed using the density functional approach B3LYP and the 6-31G(d,p) basis set. Moreover, the interaction of silver-dithizone complexes was supported by UV-Vis spectroscopy, an FT-IR spectrum simulated using B3LYP/6-31G(d,p), and 1H NMR spectra calculated using the B3LYP/6-311+G(2d,p) method, compared with the experimental data. The results showed an ion-exchange interaction between the hydrogen of dithizone and the silver atom, with minimized binding energies for the silver-dithizone interaction, while AgNPs interacted with dithizone in the form of complexes, as confirmed by transmission electron microscopy (TEM). Therefore, the results can provide useful information for the determination of complex interactions using computer simulations.

Keywords: silver nanoparticles, dithizone, DFT, NMR

Procedia PDF Downloads 207
726 Event Driven Dynamic Clustering and Data Aggregation in Wireless Sensor Network

Authors: Ashok V. Sutagundar, Sunilkumar S. Manvi

Abstract:

Energy, delay and bandwidth are the prime issues of wireless sensor networks (WSNs). Energy usage optimization and efficient bandwidth utilization are important issues in a WSN, and event-triggered data aggregation facilitates such optimal tasks for the event-affected area. Reliable delivery of critical information to the sink node is also a major challenge. To tackle these issues, we propose an event-driven dynamic clustering and data aggregation scheme for WSNs that enhances the lifetime of the network by minimizing redundant data transmission. The proposed scheme operates as follows: (1) Whenever an event is triggered, the event-triggered node selects the cluster head. (2) The cluster head gathers data from the sensor nodes within the cluster. (3) The cluster head identifies and classifies the events in the collected data using a Bayesian classifier. (4) Data are aggregated using a statistical method. (5) The cluster head discovers paths to the sink node using residual energy, path distance and bandwidth. (6) If the aggregated data is critical, the cluster head sends it over multiple paths for reliable communication. (7) Otherwise, the aggregated data is transmitted towards the sink node over the single path with the most bandwidth and residual energy. The performance of the scheme is validated for various WSN scenarios to evaluate the effectiveness of the proposed approach in terms of aggregation time, cluster formation time and energy consumed for aggregation.
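
Steps (5)-(7) can be sketched as a path-scoring rule: rank candidate paths by residual energy, distance and bandwidth, then route critical traffic over several paths (the scoring weights here are illustrative assumptions, not values from the paper):

```python
# Hypothetical linear scoring of candidate paths to the sink.
def path_score(residual_energy, distance, bandwidth,
               w_e=0.4, w_d=0.3, w_b=0.3):
    # Higher energy/bandwidth raise the score; longer distance lowers it.
    return w_e * residual_energy + w_b * bandwidth - w_d * distance

def select_paths(paths, critical):
    """paths: list of (energy, distance, bandwidth) tuples.
    Returns the best path, or the two best for critical (multipath) traffic."""
    ranked = sorted(paths, key=lambda p: path_score(*p), reverse=True)
    return ranked[:2] if critical else ranked[:1]
```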

Keywords: wireless sensor network, dynamic clustering, data aggregation, wireless communication

Procedia PDF Downloads 451
725 From Comfort to Safety: Assessing the Influence of Car Seat Design on Driver Reaction and Performance

Authors: Sabariah Mohd Yusoff, Qamaruddin Adzeem Muhamad Murad

Abstract:

This study investigates the impact of car seat design on driver response time, addressing a critical gap in understanding how ergonomic features influence both performance and safety. Controlled driving experiments were conducted with fourteen participants (11 male, 3 female) across three locations chosen for their varying traffic conditions to account for differences in driver alertness. Participants interacted with various seat designs while performing driving tasks, and objective metrics such as braking and steering response times were meticulously recorded. Advanced statistical methods, including regression analysis and t-tests, were employed to identify design factors that significantly affect driver response times. Subjective feedback was gathered through detailed questionnaires—focused on driving experience and knowledge of response time—and in-depth interviews. This qualitative data was analyzed thematically to provide insights into driver comfort and usability preferences. The study aims to identify key seat design features that impact driver response time and to gain a deeper understanding of driver preferences for comfort and usability. The findings are expected to inform evidence-based guidelines for optimizing car seat design, ultimately enhancing driver performance and safety. The research offers valuable implications for automotive manufacturers and designers, contributing to the development of seats that improve driver response time and overall driving safety.
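
The t-test comparison of response times under different seat designs can be sketched as follows (Welch's unequal-variance form, a common choice for small unbalanced samples; the study does not specify which variant was used, and any data passed in would be illustrative):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for comparing mean response times of two groups,
    e.g. braking times recorded under two seat designs."""
    def mean_var(s):
        m = sum(s) / len(s)
        v = sum((x - m) ** 2 for x in s) / (len(s) - 1)  # sample variance
        return m, v
    ma, va = mean_var(sample_a)
    mb, vb = mean_var(sample_b)
    return (ma - mb) / math.sqrt(va / len(sample_a) + vb / len(sample_b))
```

A positive statistic means group A responded more slowly on average; significance is then judged against the t distribution with Welch's degrees of freedom.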

Keywords: car seat design, driver response time, cognitive driving, ergonomics optimization

Procedia PDF Downloads 24
724 A Geographic Overview of Offshore Energy Cleantech in Portugal

Authors: Ana Pego

Abstract:

Environmental technologies have been under development for decades, while clean technologies (cleantech) emerged only a few years ago. The use of cleantech has become very important in the current era of environmental challenges, and the market itself has become more competitive and more collaborative in pursuit of better use of clean technologies. This paper shows the importance of clean technologies in the offshore energy sector of the Portuguese market, their localization, and their impact on the economy. Clean technologies are directly related to the renewables cluster and to economic and social resource optimization criteria, geographic aspects, climate change, and soil features. Cleantech is also linked to regional development and socio-technical transitions in organisations. Economic and social combinations allow regions to specialise in activities, raising employment, reducing energy costs, generating local knowledge spillovers, and fostering business collaboration and competitiveness. The methodology is both quantitative (an input-output (IO) matrix for Portugal, 2013) and qualitative (questionnaires to stakeholders). The mix of both methodologies will confirm whether the use of these technologies has a positive impact on the economic and social variables of the model. A positive impact on the Portuguese economy is expected, in both investment and employment, taking into account the localization of offshore renewable activities. The importance of offshore renewable investment in Portugal rests on a few points worth highlighting: an increase in specialised employment, the localization of specific activities in the territory, and an increase in value added in certain regions. The conclusions will allow researchers and organisations to compare the Portuguese model with other European regions, towards a better use of natural and human resources.
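The quantitative side of the methodology, an input-output matrix, can be illustrated with a two-sector Leontief model: total output solves x = Ax + d, i.e. x = (I − A)⁻¹d. The coefficients and demand shock below are hypothetical, not the Portugal 2013 figures:

```python
# A minimal two-sector Leontief input-output model of the kind used to
# estimate economy-wide impacts of an investment shock. The technical
# coefficients and final demand are invented; they are not the Portugal
# 2013 IO table.

def leontief_output(A, d):
    """Solve x = A x + d for total output x, i.e. x = (I - A)^-1 d,
    using the closed-form inverse of the 2x2 matrix (I - A)."""
    (a, b), (c, e) = A
    det = (1 - a) * (1 - e) - b * c
    x0 = ((1 - e) * d[0] + b * d[1]) / det
    x1 = (c * d[0] + (1 - a) * d[1]) / det
    return [x0, x1]

A = [[0.2, 0.3],   # inter-industry requirements per unit of output
     [0.1, 0.4]]
d = [100.0, 50.0]  # final demand, e.g. an offshore-renewables investment shock
x = leontief_output(A, d)
```

Total output exceeds the direct demand because each sector's production induces intermediate demand in the other, which is the multiplier effect IO analysis captures.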

Keywords: cleantech, economic impact, localisation, territory dynamics

Procedia PDF Downloads 228
723 Study on the Spatial Vitality of Waterfront Rail Transit Station Area: A Case Study of Main Urban Area in Chongqing

Authors: Lianxue Shi

Abstract:

Urban waterfront rail transit stations exert a dual impact on both the waterfront and the transit station, resulting in a concentration of development elements in the surrounding space. To develop the space around such stations more effectively, this study takes the perspective of the integration of station, city, and people. Taking Chongqing as an example, and based on the ArcGIS platform, it explores station-area vitality along three dimensions: crowd activity heat, spatial facility heat, and spatial accessibility, and conducts a comprehensive evaluation and interpretation of the vitality surrounding waterfront rail transit station areas in Chongqing. The study found that (1) the spatial vitality in the vicinity of waterfront rail transit stations is correlated with the waterfront's functional zoning and development intensity: stations situated in waterfront residential and public spaces are more likely to draw crowds, whereas those located in waterfront industrial areas exhibit lower vitality. (2) Effective transportation accessibility plays a pivotal role in maintaining a steady flow of passengers and facilitating their movement; however, the three-dimensionality of urban space in mountainous regions is a notable challenge, leaving some stations with limited accessibility. This underscores the importance of optimizing walking space, particularly the access routes from the station to the waterfront area. (3) The density of spatial facilities around waterfront stations in old urban areas lags behind the population's needs, indicating a need to strengthen the allocation of relevant land and resources in these areas.

Keywords: rail transit station, waterfront, influence area, spatial vitality, urban vitality

Procedia PDF Downloads 31
722 Optimal Sequential Scheduling of Imperfect Maintenance Last Policy for a System Subject to Shocks

Authors: Yen-Luan Chen

Abstract:

Maintenance has a great impact on production capacity and product quality, and therefore deserves continuous improvement. A maintenance procedure performed before failure is called preventive maintenance (PM). Sequential PM, which maintains a system at a sequence of intervals with unequal lengths, is one of the most commonly used PM policies. This article proposes a generalized sequential PM policy for a system subject to shocks, with imperfect maintenance and random working times. Shocks arrive according to a non-homogeneous Poisson process (NHPP) whose intensity function varies across maintenance intervals. When a shock occurs, the system suffers one of two types of failure with number-dependent probabilities: a type-I (minor) failure, rectified by minimal repair, or a type-II (catastrophic) failure, removed by corrective maintenance (CM). Imperfect maintenance is carried out to improve the system failure characteristics under the altered shock process. Under the sequential preventive-maintenance-last (PML) policy, the system is maintained before any CM occurs, either at a planned time Ti or at the completion of a working time in the i-th maintenance interval, whichever occurs last. At the N-th maintenance, the system is replaced rather than maintained. This article is the first to treat the sequential PML policy with random working time and imperfect maintenance in reliability engineering. The optimal preventive maintenance schedule that minimizes the mean cost rate of a replacement cycle is derived analytically, and its existence and uniqueness are established. The proposed models provide a general framework for analyzing maintenance policies in reliability theory.
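The shock process described above can be simulated with Lewis-Shedler thinning for the NHPP, classifying each shock as type-I or type-II. The intensity function and type-II probability below are illustrative assumptions, not the paper's model parameters:

```python
# Simulating the shock process of one maintenance interval: shocks arrive
# by an NHPP (generated via Lewis-Shedler thinning), and each shock is a
# type-II (catastrophic) failure with a number-dependent probability p(k).
# The intensity function and p(k) below are illustrative assumptions only.
import math
import random

def simulate_interval(horizon, lam, lam_max, p_type2, rng):
    """Return (minimal_repairs, cm_time); cm_time is None if no type-II
    failure occurs before the horizon. lam_max must bound lam on [0, horizon]."""
    t, k, repairs = 0.0, 0, 0
    while True:
        t += rng.expovariate(lam_max)          # candidate arrival
        if t > horizon:
            return repairs, None
        if rng.random() <= lam(t) / lam_max:   # thinning: accept the shock
            k += 1
            if rng.random() < p_type2(k):      # catastrophic -> CM
                return repairs, t
            repairs += 1                       # minor -> minimal repair

rng = random.Random(42)
lam = lambda t: 0.5 + 0.1 * t                  # increasing shock intensity
p_type2 = lambda k: 1 - math.exp(-0.05 * k)    # grows with shock count
repairs, cm_time = simulate_interval(10.0, lam, lam_max=1.5,
                                     p_type2=p_type2, rng=rng)
```

Running many such replications over candidate schedules T1, ..., TN is one way to estimate the mean cost rate that the paper minimizes analytically.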

Keywords: optimization, preventive maintenance, random working time, minimal repair, replacement, reliability

Procedia PDF Downloads 275
721 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity (EPQ) model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time, and maximum permissible shortage. Since costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated as a bi-objective multi-product EPQ model and then solved with three multi-objective decision-making (MODM) methods. The three methods are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and a multi-attribute decision-making (MADM) method. The results demonstrate that the augmented epsilon-constraint method outperforms the global criterion and goal programming methods in terms of both the optimal objective values and CPU time. A sensitivity analysis illustrates the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product EPQ model under a vendor-managed inventory policy with several constraints.
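The augmented epsilon-constraint idea can be sketched on a toy discrete bi-objective problem: minimize f1 subject to f2 ≤ ε, with a small augmentation term that avoids weakly efficient points. The candidate set and objectives below are invented; the paper's actual EPQ model is a constrained continuous program:

```python
# The augmented epsilon-constraint method on a toy discrete bi-objective
# problem. The candidates and objective functions are invented for
# illustration and are not the paper's EPQ model.

def augmented_eps_constraint(candidates, f1, f2, eps, delta=1e-3):
    """Among candidates with f2 <= eps, pick the one minimizing
    f1 - delta * (eps - f2); the augmentation rewards slack in f2."""
    feasible = [c for c in candidates if f2(c) <= eps]
    return min(feasible, key=lambda c: f1(c) - delta * (eps - f2(c)))

def pareto_front(candidates, f1, f2, eps_grid):
    """Sweep eps over a grid to trace (approximately) Pareto-optimal points."""
    front = {augmented_eps_constraint(candidates, f1, f2, e) for e in eps_grid}
    return sorted(front)

# Toy production quantities: f1 = holding cost, f2 = shortage cost.
qs = [10, 20, 30, 40, 50]
f1 = lambda q: q * 2.0            # holding cost grows with quantity
f2 = lambda q: 500.0 / q          # shortage cost shrinks with quantity
front = pareto_front(qs, f1, f2, eps_grid=[50, 25, 17, 13])
```

Each value of ε trades the shortage-cost bound against holding cost, so the sweep recovers one efficient quantity per grid point.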

Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory

Procedia PDF Downloads 129
720 Development of an Interactive Display-Control Layout Design System for Trains Based on Train Drivers’ Mental Models

Authors: Hyeonkyeong Yang, Minseok Son, Taekbeom Yoo, Woojin Park

Abstract:

Human error is the most salient contributing factor to railway accidents. To reduce the frequency of human errors, many researchers and train designers have adopted ergonomic design principles for designing display-control layout in rail cab. There exist a number of approaches for designing the display control layout based on optimization methods. However, the ergonomically optimized layout design may not be the best design for train drivers, since the drivers have their own mental models based on their experiences. Consequently, the drivers may prefer the existing display-control layout design over the optimal design, and even show better driving performance using the existing design compared to that using the optimal design. Thus, in addition to ergonomic design principles, train drivers’ mental models also need to be considered for designing display-control layout in rail cab. This paper developed an ergonomic assessment system of display-control layout design, and an interactive layout design system that can generate design alternatives and calculate ergonomic assessment score in real-time. The design alternatives generated from the interactive layout design system may not include the optimal design from the ergonomics point of view. However, the system’s strength is that it considers train drivers’ mental models, which can help generate alternatives that are more friendly and easier to use for train drivers. Also, with the developed system, non-experts in ergonomics, such as train drivers, can refine the design alternatives and improve ergonomic assessment score in real-time.

Keywords: display-control layout design, interactive layout design system, mental model, train drivers

Procedia PDF Downloads 306
719 Energy Efficiency and Sustainability Analytics for Reducing Carbon Emissions in Oil Refineries

Authors: Gaurav Kumar Sinha

Abstract:

The oil refining industry, significant in its energy consumption and carbon emissions, faces increasing pressure to reduce its environmental footprint. This article explores the application of energy efficiency and sustainability analytics as crucial tools for reducing carbon emissions in oil refineries. Through a comprehensive review of current practices and technologies, this study highlights innovative analytical approaches that can significantly enhance energy efficiency. We focus on the integration of advanced data analytics, including machine learning and predictive modeling, to optimize process controls and energy use. These technologies are examined for their potential to not only lower energy consumption but also reduce greenhouse gas emissions. Additionally, the article discusses the implementation of sustainability analytics to monitor and improve environmental performance across various operational facets of oil refineries. We explore case studies where predictive analytics have successfully identified opportunities for reducing energy use and emissions, providing a template for industry-wide application. The challenges associated with deploying these analytics, such as data integration and the need for skilled personnel, are also addressed. The paper concludes with strategic recommendations for oil refineries aiming to enhance their sustainability practices through the adoption of targeted analytics. By implementing these measures, refineries can achieve significant reductions in carbon emissions, aligning with global environmental goals and regulatory requirements.

Keywords: energy efficiency, sustainability analytics, carbon emissions, oil refineries, data analytics, machine learning, predictive modeling, process optimization, greenhouse gas reduction, environmental performance

Procedia PDF Downloads 31
718 The BNCT Project Using the Cf-252 Source: Monte Carlo Simulations

Authors: Marta Błażkiewicz-Mazurek, Adam Konefał

Abstract:

The project can be divided into three main parts: i. modeling the Cf-252 neutron source and conducting an experiment to verify the correctness of the obtained results, ii. design of the BNCT system infrastructure, iii. analysis of the results from the logical detector. Modeling of the Cf-252 source included designing the shape and size of the source as well as the energy and spatial distribution of emitted neutrons. Two options were considered: a point source and a cylindrical spatial source. The energy distribution corresponded to various spectra taken from specialized literature. Directionally isotropic neutron emission was simulated. The simulation results were compared with experimental values determined using the activation detector method using indium foils and cadmium shields. The relative fluence rate of thermal and resonance neutrons was compared in the chosen places in the vicinity of the source. The second part of the project related to the modeling of the BNCT infrastructure consisted of developing a simulation program taking into account all the essential components of this system. Materials with moderating, absorbing, and backscattering properties of neutrons were adopted into the project. Additionally, a gamma radiation filter was introduced into the beam output system. The analysis of the simulation results obtained using a logical detector located at the beam exit from the BNCT infrastructure included neutron energy and their spatial distribution. Optimization of the system involved changing the size and materials of the system to obtain a suitable collimated beam of thermal neutrons.
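The energy distribution of the Cf-252 source can be sketched by sampling a Watt fission spectrum with rejection sampling. The parameters a = 1.18 MeV and b = 1.03419 MeV⁻¹ are commonly quoted for Cf-252 spontaneous fission; treat them, and the 15 MeV truncation, as assumptions of this sketch rather than the project's exact spectrum:

```python
# Sampling fission-neutron energies for a Cf-252 source from a Watt spectrum,
# f(E) ~ exp(-E/a) * sinh(sqrt(b*E)), by rejection sampling on a bounded
# energy range. a = 1.18 MeV and b = 1.03419 1/MeV are commonly used Watt
# parameters for Cf-252 spontaneous fission; treat them as an assumption here.
import math
import random

A_WATT, B_WATT = 1.18, 1.03419   # MeV, 1/MeV
E_MAX = 15.0                     # truncate the spectrum at 15 MeV

def watt(e):
    return math.exp(-e / A_WATT) * math.sinh(math.sqrt(B_WATT * e))

# Envelope for rejection: the spectrum's maximum on (0, E_MAX], found on a grid.
F_MAX = max(watt(0.001 * i) for i in range(1, int(E_MAX * 1000) + 1))

def sample_energy(rng):
    """Draw one neutron energy (MeV) from the truncated Watt spectrum."""
    while True:
        e = rng.uniform(0.0, E_MAX)
        if rng.random() * F_MAX <= watt(e):
            return e

rng = random.Random(0)
energies = [sample_energy(rng) for _ in range(20000)]
mean_e = sum(energies) / len(energies)   # should land near ~2.1 MeV
```

A sample mean near 2.1 MeV is a quick sanity check, since the Watt mean 3a/2 + a²b/4 evaluates to about 2.13 MeV for these parameters.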

Keywords: BNCT, Monte Carlo, neutrons, simulation, modeling

Procedia PDF Downloads 30
717 Floating Oral in Situ Gelling System of Anticancer Drug

Authors: Umme Hani, Mohammed Rahmatulla, Mohammed Ghazwani, Ali Alqahtani, Yahya Alhamhoom

Abstract:

Background and introduction: Neratinib is a potent anticancer drug used for the treatment of breast cancer. It is poorly soluble at higher pH, which tends to diminish its therapeutic effect in the lower gastrointestinal tract (GIT), leading to poor bioavailability. An attempt has been made to prepare and develop a gastro-retentive system of Neratinib that improves drug bioavailability in the GIT by extending gastric retention time. Materials and methods: In the present study, a three-factor, two-level (2³) factorial design was used to inspect the effects of three independent variables (factors), sodium alginate (A), sodium bicarbonate (B), and sodium citrate (C), on dependent variables such as in vitro gelation, in vitro floating, water uptake, and percentage drug release. Results: All formulations showed pH in the range 6.7±0.25 to 7.4±0.24, and percentage drug content was 96.3±0.27 to 99.5±0.28%. In vitro gelation was immediate, and the gel remained intact for an extended period. Water uptake ranged from 9.01±0.15 to 31.01±0.25%, and floating lag time ranged from 7±0.39 to 57±0.36 s; F4 and F5 continued floating even after 12 h. All formulations released around 90% of the drug within 12 h. The selected independent variables were observed to affect the dependent variables. Conclusion: The developed system may be a promising alternative approach to augment gastric retention and enhance the therapeutic efficacy of the drug.
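A 2³ full factorial design enumerates eight runs over low/high levels of the three factors, and each factor's main effect is the average response at its high level minus its low level. The response values below (floating lag times) are invented for illustration, not the study's measurements:

```python
# Sketch of a 2^3 full factorial design: eight runs over coded low/high
# levels of sodium alginate (A), sodium bicarbonate (B), and sodium
# citrate (C). The responses (floating lag time, seconds) are invented.
from itertools import product

runs = list(product([-1, 1], repeat=3))        # coded levels for (A, B, C)
lag_time = {(-1, -1, -1): 57, (-1, -1, 1): 49,
            (-1, 1, -1): 30, (-1, 1, 1): 25,
            (1, -1, -1): 40, (1, -1, 1): 34,
            (1, 1, -1): 12, (1, 1, 1): 7}

def main_effect(factor):
    """Average response at the high level minus the low level."""
    hi = [lag_time[r] for r in runs if r[factor] == 1]
    lo = [lag_time[r] for r in runs if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effect_A, effect_B, effect_C = (main_effect(i) for i in range(3))
```

With these made-up responses, the bicarbonate factor (B) shows the largest negative effect on lag time, which is the kind of ranking a factorial analysis produces.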

Keywords: neratinib, 2³ factorial design, sodium alginate, floating, in situ gelling system

Procedia PDF Downloads 163
716 Sustainable Manufacturing Industries and Energy-Water Nexus Approach

Authors: Shahbaz Abbas, Lin Han Chiang Hsieh

Abstract:

Significant population growth and climate change have contributed to the depletion of natural resources and threaten their future sustainability. Manufacturing industries have a substantial impact on every country's economy, but sustaining industrial resources is challenging, and policymakers have been developing possible solutions to manage resources such as raw materials, energy, water, and the industrial supply chain. To address these challenges, the nexus approach is one of the optimization and modelling techniques in recent sustainable environmental research. The interactions between nexus components acknowledge that all components depend on and interrelate with each other; therefore, their sustainability is also mutually associated. In addition, the nexus concept provides not only resource sustainability but also environmental sustainability, by utilizing industrial waste as a resource for industrial processes. Based on the energy-water nexus, this study develops a resource-energy-water nexus for the sugar industry to understand the interactions between sugarcane, energy, and water towards a sustainable sugar industry. In particular, the focus of the research is the Taiwanese sugar industry; however, the same approach can be adopted worldwide to optimize the sustainability of sugar industries. It is concluded that there are significant interactions between sugarcane, energy consumption, and water consumption in the sugar industry that can help manage resource scarcity in the future. The interactions between sugarcane and energy also deliver a mechanism to reuse sugar-industry waste as a source of energy, supporting both industrial and environmental sustainability. The desired outcomes of the nexus can be achieved with modifications to the policy and regulations of the Taiwanese industrial sector.

Keywords: energy-water nexus, environmental sustainability, industrial sustainability, natural resource management

Procedia PDF Downloads 124
715 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, responsible for voltage transformation, is of great relevance in the power system, and the oil-immersed transformer is widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient transformer faults. Accurate prediction of incipient faults in transformer oil is needed to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnostics have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer failures. In this study, a machine learning technique using gradient descent optimization and a Support Vector Machine (SVM) was employed to predict incipient transformer faults. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach has two phases: training and testing. The gradient descent algorithm is trained with a training dataset, and the learned model is then applied to a set of new data; the two datasets are used to establish the accuracy of the proposed model. This study presents a transformer fault diagnostic model based on an SVM trained with gradient descent, with satisfactory diagnostic capability and a higher rate of correctly predicting incipient transformer failures than existing diagnostic methods.
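The SVM-plus-gradient-descent pipeline can be sketched as a linear SVM trained by (sub)gradient descent on the regularized hinge loss. The tiny dataset below stands in for dissolved-gas features; the study's real data and hyperparameters are not reproduced here:

```python
# A minimal linear SVM trained by (sub)gradient descent on the hinge loss,
# illustrating the SVM + gradient-descent pipeline the study describes.
# The two-feature dataset stands in for scaled dissolved-gas concentrations
# and is invented for illustration.

def train_svm(X, y, lr=0.01, lam=0.01, epochs=2000):
    """Minimize lam/2*||w||^2 + mean(hinge loss) with full-batch
    subgradient steps. Labels y must be in {-1, +1}; returns (w, b)."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [lam * wj for wj in w], 0.0
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                      # point violates the margin
                for j in range(d):
                    gw[j] -= yi * xi[j] / n
                gb -= yi / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

X = [[2.0, 2.5], [1.8, 2.2], [2.2, 2.8],    # +1 = faulty
     [0.4, 0.5], [0.6, 0.3], [0.2, 0.6]]    # -1 = healthy
y = [1, 1, 1, -1, -1, -1]
w, b = train_svm(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

The training and testing phases the abstract describes correspond to fitting (w, b) on one dataset and then calling `predict` on held-out samples.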

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 322
714 Estimation of Effective Radiation Dose Following Computed Tomography Urography at Aminu Kano Teaching Hospital, Kano, Nigeria

Authors: Idris Garba, Aisha Rabiu Abdullahi, Mansur Yahuza, Akintade Dare

Abstract:

Background: CT urography (CTU) is an efficient radiological examination for the evaluation of urinary system disorders. However, patients are exposed to a significant radiation dose, which is associated with increased cancer risk. Objectives: To determine the computed tomography dose index following CTU and to evaluate organ equivalent doses. Materials and Methods: A prospective cohort study was carried out at a tertiary institution located in Kano, northwestern Nigeria. Ethical clearance was sought and obtained from the research ethics board of the institution. Demographic data, scan parameters, and CT radiation dose data were obtained from patients who underwent the CTU procedure. Effective dose, organ equivalent doses, and cancer risks were estimated using SPSS statistical software version 16 and CT dose calculator software. Result: A total of 56 patients were included in the study, 29 males and 27 females. The most common indication for CTU was renal cyst, seen mainly among young adults (15-44 yrs). CT radiation dose values for CTU were 2320 mGy·cm (DLP), 9.67 mGy (CTDIw), and 35.04 mSv (effective dose). The probability of cancer risk was estimated at 600 per million CTU examinations. Conclusion: In this study, the radiation dose for CTU is considered significantly high, with an increase in cancer risk probability. Wide variations between patient doses suggest that optimization has not yet been achieved. Patient radiation dose estimates should be taken into consideration when imaging protocols are established for CT urography.
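The effective-dose figure can be cross-checked from the DLP via the standard region conversion, E = k × DLP. The k value below (0.015 mSv per mGy·cm, often quoted for adult abdomen/pelvis scans) is a generic illustration; the study used dedicated CT dose calculator software whose coefficient evidently differs slightly:

```python
# Effective dose from a CT dose-length product via a region conversion
# coefficient: E = k * DLP. The k below (0.015 mSv per mGy*cm, a commonly
# quoted generic value for adult abdomen/pelvis scans) is an illustrative
# assumption, not the exact coefficient of the study's dose software.

K_ABDOMEN_PELVIS = 0.015   # mSv / (mGy*cm)

def effective_dose(dlp_mgy_cm, k=K_ABDOMEN_PELVIS):
    """Convert a dose-length product (mGy*cm) to effective dose (mSv)."""
    return k * dlp_mgy_cm

e = effective_dose(2320)   # ~34.8 mSv with the generic k, close to the
                           # 35.04 mSv the study reports
```

The small gap between 34.8 mSv and the reported 35.04 mSv reflects the choice of conversion coefficient, which varies with scanner model and patient anatomy.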

Keywords: CT urography, cancer risks, effective dose, radiation exposure

Procedia PDF Downloads 345
713 Design Optimization of Miniature Mechanical Drive Systems Using Tolerance Analysis Approach

Authors: Eric Mxolisi Mkhondo

Abstract:

Geometrical deviations and the interaction of mechanical parts influence the performance of miniature systems. These deviations tend to cause costly problems during assembly due to component imperfections that are invisible to the naked eye. They also tend to cause unsatisfactory performance during operation due to deformation caused by environmental conditions. One of the most effective tools for managing the deviations and interactions of parts in a system is tolerance analysis, a quantitative tool for predicting the tolerance variations defined during the design process. Traditional tolerance analysis assumes that the assembly is static and that deviations come from manufacturing discrepancies, overlooking the functionality of the whole system and the deformation of parts under environmental conditions. This paper presents an integrated tolerance analysis approach for a miniature system in operation. In this approach, a computer-aided design (CAD) model is developed from the system's specification. The CAD model is then used to specify the geometrical and dimensional tolerance limits (upper and lower) that vary component geometries and sizes while conforming to functional requirements. Worst-case tolerances are analyzed to determine the influence of dimensional changes due to operating temperatures. The method is used to evaluate the nominal condition and the worst-case conditions at the maximum and minimum dimensions of assembled components. These three conditions are evaluated at specific operating temperatures (-40°C, -18°C, 4°C, 26°C, 48°C, and 70°C). A case study on the mechanism of a zoom lens system illustrates the effectiveness of the methodology.
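The temperature-aware worst-case analysis can be sketched as a linear stack-up extended with a thermal-expansion term α·L·ΔT per part. The dimensions, tolerances, and expansion coefficients below are hypothetical, not taken from the zoom lens case study:

```python
# A worst-case tolerance stack for a chain of parts, extended with a
# thermal-expansion term alpha * L * dT per part, in the spirit of the
# approach described above. All numeric values are hypothetical.

def worst_case_stack(parts, dT):
    """Each part: (nominal_mm, tol_mm, alpha_per_degC).
    Returns (min_length, nominal_length, max_length) of the assembly at
    temperature offset dT from the reference temperature."""
    nominal = sum(n for n, _, _ in parts)
    tol = sum(t for _, t, _ in parts)              # worst case: tolerances add
    thermal = sum(n * a * dT for n, _, a in parts) # thermal growth shifts all
    return (nominal - tol + thermal,
            nominal + thermal,
            nominal + tol + thermal)

# Hypothetical three-part stack: aluminium barrel, steel spacer, lens cell.
parts = [(40.0, 0.05, 23e-6),   # aluminium, alpha ~ 23e-6 /degC
         (10.0, 0.02, 12e-6),   # steel, alpha ~ 12e-6 /degC
         (15.0, 0.03, 23e-6)]
cold = worst_case_stack(parts, dT=-40 - 20)   # 20 degC reference to -40 degC
hot = worst_case_stack(parts, dT=70 - 20)     # 20 degC reference to 70 degC
```

Evaluating the stack at each specified operating temperature gives the three conditions (nominal, maximum, minimum) the paper compares.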

Keywords: geometric dimensioning, tolerance analysis, worst-case analysis, zoom lens mechanism

Procedia PDF Downloads 165
712 Encapsulation of Probiotic Bacteria in Complex Coacervates

Authors: L. A. Bosnea, T. Moschakis, C. Biliaderis

Abstract:

Two probiotic strains of Lactobacillus paracasei subsp. paracasei (E6) and Lactobacillus paraplantarum (B1), isolated from traditional Greek dairy products, were microencapsulated by complex coacervation using whey protein isolate (WPI, 3% w/v) and gum arabic (GA, 3% w/v) solutions mixed at different polymer ratio (1:1, 2:1 and 4:1). The effect of total biopolymer concentration on cell viability was assessed using WPI and GA solutions of 1, 3 and 6% w/v at a constant ratio of 2:1. Also, several parameters were examined for optimization of the microcapsule formation, such as inoculum concentration and the effect of ionic strength. The viability of the bacterial cells during heat treatment and under simulated gut conditions was also evaluated. Among the different WPI/GA weight ratios tested (1:1, 2:1, and 4:1), the highest survival rate was observed for the coacervate structures made with the ratio of 2:1. The protection efficiency at low pH values is influenced by both concentration and the ratio of the added biopolymers. Moreover, the inoculum concentration seems to affect the efficiency of microcapsules to entrap the bacterial cells since an optimum level was noted at less than 8 log cfu/ml. Generally, entrapment of lactobacilli in the complex coacervate structure enhanced the viability of the microorganisms when exposed to a low pH environment (pH 2.0). Both encapsulated strains retained high viability in simulated gastric juice (>73%), especially in comparison with non-encapsulated (free) cells (<19%). The encapsulated lactobacilli also exhibited enhanced viability after 10–30 min of heat treatment (65°C) as well as at different NaCl concentrations (pH 4.0). Overall, the results of this study suggest that complex coacervation with WPI/GA has a potential to deliver live probiotics in low pH food systems and fermented dairy products; the complexes can dissolve at pH 7.0 (gut environment), releasing the microbial cells.

Keywords: probiotic, complex coacervation, whey, encapsulation

Procedia PDF Downloads 297