Search results for: data transfer optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29550

27990 Policy to Improve in vitro Fertilization Outcome in Women with Poor Ovarian Response: Frozen Embryo Transfer (ET) of Accumulated Vitrified Embryos vs. Frozen ET of Accumulated Vitrified Embryos plus Fresh ET

Authors: Hwang Kwon

Abstract:

Objective: To assess the efficacy of embryo transfer (ET) of accumulated vitrified embryos and to compare pregnancy outcomes between ET of thawed embryos following accumulation of vitrified embryos (frozen ET) and ET of fresh plus thawed frozen embryos following accumulation of vitrified embryos (fresh ET + frozen ET). Study design: Patients were poor ovarian responders defined according to the Bologna criteria, as well as a subgroup of women whose previous IVF-ET cycle through controlled ovarian stimulation (COS) yielded one or no embryos. Sixty-four frozen ETs were performed following accumulation of vitrified embryos (ACCE) (ACCE Frozen), and 51 fresh + frozen ETs were performed following accumulation of vitrified embryos (ACCE Fresh + Frozen). Positive βhCG rate, clinical pregnancy rate, ongoing pregnancy rate, and proportion of good quality embryos (%, ±SD) were compared between the two groups. Results: There were more good quality embryos in the ACCE Fresh + Frozen group than in the ACCE Frozen group: 60±34.7 versus 42.9±28.9, respectively (p=0.03). Positive βhCG rate [18/64 (28.1%) vs. 13/51 (25.5%); p=0.75] and clinical pregnancy rate [12/64 (18.8%) vs. 11/51 (21.6%); p=0.71] were comparable between the two groups. Conclusion: Accumulation of vitrified embryos is an effective method in patients with poor ovarian response who fulfill the Bologna criteria. Pregnancy outcomes were comparable between the two groups.

Keywords: accumulation of embryos, frozen embryo transfer, poor responder, Bologna criteria

Procedia PDF Downloads 229
27989 Wind Speed Prediction Using Passive Aggregation Artificial Intelligence Model

Authors: Tarek Aboueldahab, Amin Mohamed Nassar

Abstract:

Wind energy is a fluctuating energy source, unlike conventional power plants; it is therefore necessary to accurately predict short-term wind speed in order to integrate wind energy into the electricity supply structure. To do so, we present a hybrid artificial intelligence model for short-term wind speed prediction based on passive aggregation of particle swarm optimization and neural networks. As a result, a clear improvement in prediction accuracy is obtained compared to the standard artificial intelligence method.
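
The abstract does not spell out the passive-aggregation variant, so the following is only a minimal sketch of the general idea it builds on: a small feedforward network whose weights are trained by a plain global-best PSO on lagged wind speeds. The data, layer sizes, and PSO coefficients are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy wind-speed series and lagged training pairs (hypothetical data).
speeds = 8 + 2 * np.sin(np.arange(300) / 10) + rng.normal(0, 0.3, 300)
LAGS = 4
X = np.stack([speeds[i:i + LAGS] for i in range(len(speeds) - LAGS)])
y = speeds[LAGS:]

H = 6                                    # hidden units
DIM = LAGS * H + H + H + 1               # total number of weights

def predict(w, X):
    """Single-hidden-layer network, weights unpacked from a flat vector."""
    W1 = w[:LAGS * H].reshape(LAGS, H)
    b1 = w[LAGS * H:LAGS * H + H]
    W2 = w[LAGS * H + H:LAGS * H + H + H]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

# Plain global-best PSO over the network weights.
N_PART, ITERS = 30, 200
pos = rng.normal(0, 0.5, (N_PART, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random((2, N_PART, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("training MSE:", mse(gbest))
```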

Keywords: artificial intelligence, neural networks, particle swarm optimization, passive aggregation, wind speed prediction

Procedia PDF Downloads 450
27988 Frequency Recognition Models for Steady State Visual Evoked Potential Based Brain Computer Interfaces (BCIs)

Authors: Zeki Oralhan, Mahmut Tokmakçı

Abstract:

SSVEP-based brain computer interface (BCI) systems are preferred because of their high information transfer rate (ITR) and practical use. ITR is the parameter describing overall BCI performance, and a high ITR requires, among other things, high classification accuracy. In this study, we investigated how to recognize SSVEP responses in a shorter time and with a lower error rate. In the experiment, there were 8 flickers on a liquid crystal display (LCD). Participants gazed for 10 seconds at the flicker that had a 12 Hz frequency and a 50% duty cycle ratio. During the experiment, EEG signals were acquired via an EEG device, and the EEG data were filtered in a preprocessing step. After that, Canonical Correlation Analysis (CCA), Multiset CCA (MsetCCA), phase constrained CCA (PCCA), and Multiway CCA (MwayCCA) methods were applied to the data. The highest average accuracy was reached when MsetCCA was applied.
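
As a rough illustration of the standard CCA recognizer that this family of methods extends, the sketch below correlates a multichannel EEG window against sine/cosine reference sets for each candidate flicker frequency and picks the best-scoring one. The sampling rate, channel count, and synthetic signal are assumptions, not the study's recording setup.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                    # sampling rate in Hz (assumed)
T = 2.0                     # analysis window in seconds
t = np.arange(int(FS * T)) / FS
n_ch = 8

def reference(freq, n_harmonics=2):
    """Sine/cosine reference set for one stimulus frequency."""
    cols = []
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(cols)

def cca_score(eeg, freq):
    """Largest canonical correlation between EEG and the reference set."""
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, reference(freq))
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

# Synthetic 12 Hz SSVEP buried in noise (stand-in for recorded EEG).
rng = np.random.default_rng(1)
eeg = rng.normal(0, 1, (len(t), n_ch))
eeg += 0.5 * np.sin(2 * np.pi * 12 * t)[:, None]

candidates = [8, 9, 10, 11, 12, 13, 14, 15]     # flicker frequencies (Hz)
scores = {f: cca_score(eeg, f) for f in candidates}
print("detected:", max(scores, key=scores.get), "Hz")
```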

Keywords: brain computer interface, canonical correlation analysis, human computer interaction, SSVEP

Procedia PDF Downloads 266
27987 Effect of Nanoparticles Concentration, pH and Agitation on Bioethanol Production by Saccharomyces cerevisiae BY4743: An Optimization Study

Authors: Adeyemi Isaac Sanusi, Gueguim E. B. Kana

Abstract:

Nanoparticles have received the attention of the scientific community due to their biotechnological potential. They exhibit advantageous size-, shape- and concentration-dependent catalytic, stabilizing, immunoassay and immobilization properties. This study investigates the impact of metallic oxide nanoparticles (NPs) on ethanol production by Saccharomyces cerevisiae BY4743. Nine different nanoparticles were synthesized using a precipitation method and microwave treatment. The synthesized nanoparticles were characterized by Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Fermentation processes were carried out at varied NP concentrations (0–0.08 wt%). The highest ethanol concentrations were achieved after 24 h using cobalt NPs (5.07 g/l), copper NPs (4.86 g/l) and manganese NPs (4.74 g/l) at 0.01 wt% NP concentration, representing 13.4%, 8.7% and 6.0% increases, respectively, over the control (4.47 g/l). The lowest ethanol concentration (0.17 g/l) was obtained when 0.08 wt% of silver NPs was used, and lower ethanol concentrations were generally observed at higher NP concentrations. Ethanol concentration decreased after 24 h in all processes. In all setups with NPs, the pH was observed to be stable, and the stability was directly proportional to nanoparticle concentration. These findings suggest that some of the NPs have catalytic and pH-stabilizing potential in these bioprocesses. Ethanol production by Saccharomyces cerevisiae BY4743 was enhanced in the presence of cobalt, copper and manganese NPs. An optimization study using response surface methodology (RSM) will further elucidate the impact of these nanoparticles on bioethanol production.

Keywords: agitation, bioethanol, nanoparticles concentration, optimization, pH value

Procedia PDF Downloads 188
27986 Finding Optimal Solutions to Management Problems with the use of Econometric and Multiobjective Programming

Authors: M. Moradi Dalini, M. R. Talebi

Abstract:

This research presents a technique that combines econometrics and multiobjective programming to select and obtain optimal solutions to management problems. It is taken for granted that it is important to analyze which combination of values of the explanatory variables in an econometric model would lead to the simultaneous achievement of the best values of the response variables. In this case, if a certain degree of conflict is observed among the response variables, we suggest applying a multiobjective method to the results obtained from a regression analysis. In fact, with the use of a multiobjective method, we can make the best decision about the conflicting relationship between the response variables and obtain the optimal solution. The benefit of combining multiobjective programming and econometrics is the assessment of a balanced “optimal” situation among the response variables, because such information can hardly be extracted by econometric techniques alone.
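
A minimal sketch of the two-stage idea, on assumed toy data: fit one regression per response variable (the econometric step), then scalarize the conflicting predictions with a weighted sum and optimize over the explanatory variables. The variable names and the weight w are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: two conflicting responses driven by the same inputs.
X = rng.uniform(0, 10, (200, 2))                 # explanatory variables
y_profit = 3 * X[:, 0] - 1 * X[:, 1] + rng.normal(0, 1, 200)
y_risk = 2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 200)

m_profit = LinearRegression().fit(X, y_profit)   # econometric step
m_risk = LinearRegression().fit(X, y_risk)

def scalarized(x, w=0.6):
    """Weighted-sum scalarization: maximize profit, minimize risk."""
    x = x.reshape(1, -1)
    return -(w * m_profit.predict(x)[0] - (1 - w) * m_risk.predict(x)[0])

res = minimize(scalarized, x0=np.array([5.0, 5.0]),
               bounds=[(0, 10), (0, 10)])
print("compromise inputs:", res.x)
```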

Keywords: econometrics, multiobjective optimization, management problem, optimization

Procedia PDF Downloads 82
27985 A New Method to Reduce 5G Application Layer Payload Size

Authors: Gui Yang Wu, Bo Wang, Xin Wang

Abstract:

Nowadays, the 5G service-based interface architecture uses text-based payloads such as JSON to transfer business data between network functions, which has obvious advantages for internet services but causes unnecessarily large traffic. In this paper, a new 5G application payload size reduction method is presented that provides a mechanism for network functions to negotiate a new capability when network communication starts up and to reduce 5G application data according to the information negotiated with the peer network function. Without losing the advantages of 5G text-based payloads, this method demonstrates an excellent result on application payload size reduction and does not increase computing resource usage. Implementation of this method does not impact any standards or specifications and does not change any encoding or decoding functionality. In a real 5G network, this method will contribute to network efficiency and eventually save considerable computing resources.
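
The paper does not disclose its actual reduction scheme, so the sketch below only illustrates the general pattern it describes: two network functions negotiate a hypothetical payload-compression capability at session start and then exchange deflate-compressed JSON, falling back to plain JSON when the peer lacks the capability.

```python
import json
import zlib

# Hypothetical capability advertised at session setup; the peer either
# accepts it or both sides fall back to plain JSON.
MY_CAPS = {"payload-compression": ["deflate"]}

def negotiate(peer_caps):
    offered = set(MY_CAPS["payload-compression"])
    accepted = offered & set(peer_caps.get("payload-compression", []))
    return "deflate" if "deflate" in accepted else None

def encode(body, scheme):
    raw = json.dumps(body, separators=(",", ":")).encode()
    return zlib.compress(raw) if scheme == "deflate" else raw

def decode(data, scheme):
    raw = zlib.decompress(data) if scheme == "deflate" else data
    return json.loads(raw)

peer = {"payload-compression": ["deflate"]}
scheme = negotiate(peer)
msg = {"subscriberId": "imsi-001010000000001",
       "sessionData": {"qos": 5, "slices": ["eMBB", "URLLC"]}}
wire = encode(msg, scheme)
print(len(json.dumps(msg).encode()), "->", len(wire), "bytes")
assert decode(wire, scheme) == msg
```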

Keywords: 5G, JSON, payload size, service-based interface

Procedia PDF Downloads 181
27984 Assessment of Pre-Processing Influence on Near-Infrared Spectra for Predicting the Mechanical Properties of Wood

Authors: Aasheesh Raturi, Vimal Kothiyal, P. D. Semalty

Abstract:

We studied the mechanical properties of Eucalyptus tereticornis using FT-NIR spectroscopy. First, the spectra were pre-processed to eliminate useless information. Then, a prediction model was constructed by partial least squares regression. To study the influence of pre-processing on the prediction of mechanical properties in NIR analysis of wood samples, we applied various pretreatment methods: straight line subtraction, constant offset elimination, vector normalization, min-max normalization, multiplicative scatter correction, first derivative, second derivative, and their combinations with other treatments such as first derivative + straight line subtraction, first derivative + vector normalization and first derivative + multiplicative scatter correction. For each combination of preprocessing method and NIR region, RMSECV, RMSEP and the optimum number of factors/rank were obtained during the optimization phase of model development. More than 350 combinations were evaluated during the optimization process. More than one pre-processing method gave good calibration/cross-validation and prediction/test models, but only the best calibration/cross-validation and prediction/test models are reported here. The results show that one can safely use the NIR region between 4000 and 7500 cm⁻¹ with the straight line subtraction, constant offset elimination, first derivative and second derivative preprocessing methods, which were found to be the most appropriate for model development.
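
A minimal chemometrics sketch of the pipeline described (pretreatment followed by PLS), using synthetic stand-in spectra rather than the study's FT-NIR data; the Savitzky-Golay window and component count are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for FT-NIR spectra (600 wavenumber points).
n_samples, n_points = 80, 600
spectra = rng.normal(0, 0.01, (n_samples, n_points)).cumsum(axis=1)
moe = spectra[:, 300] * 50 + rng.normal(0, 0.1, n_samples)  # property

def first_derivative(X, window=15, poly=2):
    """Savitzky-Golay first derivative, a common NIR pretreatment."""
    return savgol_filter(X, window, poly, deriv=1, axis=1)

def snv(X):
    """Standard normal variate: per-spectrum centering and scaling."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

for name, Xp in [("raw", spectra),
                 ("1st derivative", first_derivative(spectra)),
                 ("SNV + 1st derivative", first_derivative(snv(spectra)))]:
    pls = PLSRegression(n_components=5)
    r2 = cross_val_score(pls, Xp, moe, cv=5, scoring="r2").mean()
    print(f"{name:22s} cross-validated R2 = {r2:.2f}")
```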

Keywords: FT-NIR, mechanical properties, pre-processing, PLS

Procedia PDF Downloads 361
27983 Comprehensive Analysis and Optimization of Alkaline Water Electrolysis for Green Hydrogen Production: Experimental Validation, Simulation Study, and Cost Analysis

Authors: Umair Ahmed, Muhammad Bin Irfan

Abstract:

This study focuses on the design and optimization of an alkaline water electrolyser for the production of green hydrogen. The aim is to enhance the durability and efficiency of this technology while simultaneously reducing the cost associated with the production of green hydrogen. The experimental results obtained from the alkaline water electrolyser are compared with simulated results obtained using Aspen Plus software, allowing a comprehensive analysis and evaluation. To achieve the aforementioned goals, several design and operational parameters are investigated. The electrode material, electrolyte concentration, and operating conditions are carefully selected to maximize the efficiency and durability of the electrolyser. Additionally, cost-effective materials and manufacturing techniques are explored to decrease the overall production cost of green hydrogen. The experimental setup includes a carefully designed alkaline water electrolyser, in which various performance parameters (such as hydrogen production rate, current density, and voltage) are measured. These experimental results are then compared with simulated data obtained using Aspen Plus software. The simulation model is developed based on fundamental principles and validated against the experimental data. The comparison between experimental and simulated results provides valuable insight into the performance of an alkaline water electrolyser. It helps to identify the areas where improvements can be made, both in design and in operation, to enhance the durability and efficiency of the system. Furthermore, the simulation results allow a cost analysis, providing an estimate of the overall production cost of green hydrogen. This study aims to develop a comprehensive understanding of alkaline water electrolysis technology. The findings of this research can contribute to the development of more efficient and durable electrolyser technology while reducing the associated cost. Ultimately, these advancements can pave the way for a more sustainable and economically viable hydrogen economy.
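
For orientation, the hydrogen output and specific energy consumption of an electrolyser stack can be estimated from Faraday's law; this is a textbook relation, not the study's Aspen Plus model, and the operating point below is hypothetical.

```python
# Faraday's law estimate of hydrogen production from stack current.
F = 96485.0          # C/mol, Faraday constant
Z = 2                # electrons transferred per H2 molecule
M_H2 = 2.016e-3      # kg/mol, molar mass of H2

def h2_rate_kg_per_h(current_a, n_cells, faradaic_eff=0.98):
    mol_per_s = faradaic_eff * n_cells * current_a / (Z * F)
    return mol_per_s * M_H2 * 3600

def specific_energy_kwh_per_kg(cell_voltage, faradaic_eff=0.98):
    # Energy per kg of H2; compare with the 39.4 kWh/kg HHV reference.
    kg_per_coulomb = faradaic_eff * M_H2 / (Z * F)
    return cell_voltage / (kg_per_coulomb * 3.6e6)

I, n, V = 100.0, 20, 1.9        # hypothetical operating point
print(f"H2: {h2_rate_kg_per_h(I, n):.3f} kg/h")
print(f"specific energy: {specific_energy_kwh_per_kg(V):.1f} kWh/kg")
```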

Keywords: sustainable development, green energy, green hydrogen, electrolysis technology

Procedia PDF Downloads 90
27982 Model Order Reduction Using Hybrid Genetic Algorithm and Simulated Annealing

Authors: Khaled Salah

Abstract:

Model order reduction has been one of the most challenging topics in recent years. In this paper, a hybrid of the genetic algorithm (GA) and the simulated annealing algorithm (SA) is used to approximate high-order transfer functions (TFs) by lower-order TFs. In this approach, the hybrid algorithm is applied to model order reduction while taking into consideration two important issues: improving accuracy and preserving the properties of the original model. Both are essential for improving the performance of simulation and computation and for maintaining the behavior of the original complex model being reduced. Compared to conventional mathematical methods that have been used to obtain reduced-order models of high-order complex models, our proposed method provides better results in terms of run-time. Thus, the proposed technique could be used in electronic design automation (EDA) tools.
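
A compact sketch of one way such a GA+SA hybrid can work, under assumed plant coefficients: a GA searches the coefficients of a second-order candidate transfer function against the high-order model's frequency response, and SA then refines the best individual. The operators and cooling schedule are illustrative, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# High-order plant: numerator/denominator coefficients in s.
num_hi = np.array([10.0, 100.0])
den_hi = np.array([1.0, 9.0, 33.0, 51.0, 26.0])
w = np.logspace(-1, 2, 200)
jw = 1j * w
H_hi = np.polyval(num_hi, jw) / np.polyval(den_hi, jw)

def fitness(p):
    """Frequency-response error of a 2nd-order candidate [b0 b1 | a1 a2]."""
    H_lo = np.polyval(p[:2], jw) / np.polyval([1.0, p[2], p[3]], jw)
    return np.mean(np.abs(H_hi - H_lo) ** 2)

# GA stage: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(0.1, 10, (40, 4))
for _ in range(150):
    f = np.array([fitness(p) for p in pop])
    idx = np.array([min(rng.integers(0, 40, 2), key=lambda i: f[i])
                    for _ in range(40)])
    parents = pop[idx]
    alpha = rng.random((40, 1))
    pop = alpha * parents + (1 - alpha) * parents[::-1]
    pop += rng.normal(0, 0.1, pop.shape)

best = min(pop, key=fitness)

# SA stage: refine the GA's best individual with a geometric cooling.
T = 1.0
cur, cur_f = best.copy(), fitness(best)
for _ in range(2000):
    cand = cur + rng.normal(0, 0.05, 4)
    cf = fitness(cand)
    if cf < cur_f or rng.random() < np.exp((cur_f - cf) / T):
        cur, cur_f = cand, cf
    T *= 0.999

print("reduced model error:", cur_f)
```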

Keywords: genetic algorithm, simulated annealing, model reduction, transfer function

Procedia PDF Downloads 143
27981 Numerical Prediction of Entropy Generation in Heat Exchangers

Authors: Nadia Allouache

Abstract:

The second-law concept is important for optimizing energy losses in heat exchangers. The present study is devoted to the numerical prediction of entropy generation due to heat transfer and friction in a double-tube heat exchanger partly or fully filled with a porous medium. The goal of this work is to find the optimal conditions that minimize entropy generation. For this purpose, numerical modeling based on the control volume method is used to describe the flow and heat transfer phenomena in the fluid and the porous medium. The effects of the porous layer thickness, its permeability, and the effective thermal conductivity have been investigated. Unexpectedly, the fully porous heat exchanger yields lower entropy generation than the partly porous case or the fluid case, even though friction increases entropy generation.
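
For a feel of the two competing contributions, the sketch below evaluates the classic Bejan expression for entropy generation per unit length of a smooth duct (a heat transfer term plus a friction term) with assumed, water-like conditions; the paper's porous-medium model is more involved.

```python
import numpy as np

# Hypothetical operating point for a smooth tube carrying water.
q = 5000.0        # W/m, heat transfer rate per unit length
m_dot = 0.3       # kg/s, mass flow rate
T = 320.0         # K, bulk temperature
D = 0.02          # m, tube diameter
k = 0.64          # W/m-K, thermal conductivity
rho = 985.0       # kg/m^3, density
mu = 5e-4         # Pa-s, dynamic viscosity
Pr = 3.0          # Prandtl number

Re = 4 * m_dot / (np.pi * D * mu)
Nu = 0.023 * Re**0.8 * Pr**0.4          # Dittus-Boelter correlation
f = 0.316 * Re**-0.25                   # Blasius friction factor

S_heat = q**2 / (np.pi * k * T**2 * Nu)
S_fric = 8 * m_dot**3 * f / (np.pi**2 * rho**2 * T * D**5)
print(f"Re = {Re:.0f}")
print(f"S'_gen = {S_heat:.4f} (heat) + {S_fric:.4f} (friction) W/m-K")
```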

Keywords: heat exchangers, porous medium, second law approach, turbulent flow

Procedia PDF Downloads 300
27980 Fair Federated Learning in Wireless Communications

Authors: Shayan Mohajer Hamidi

Abstract:

Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
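
A minimal numerical sketch of the two aggregation ideas stated above, with the weights, clipping bound, and noise scale all assumed: clients are weighted inversely to their sample counts, and clipped, Gaussian-noised updates stand in for the differential-privacy step.

```python
import numpy as np

rng = np.random.default_rng(0)

def fair_aggregate(updates, sample_counts, clip=1.0, noise_std=0.01):
    """Aggregate client updates with fairness weights and DP-style noise.

    Weights are inversely proportional to each client's sample count, so
    data-poor devices are not drowned out; each update is norm-clipped
    and Gaussian noise is added before the weighted average.
    """
    w = 1.0 / np.asarray(sample_counts, dtype=float)
    w /= w.sum()
    agg = np.zeros_like(updates[0])
    for u, wi in zip(updates, w):
        norm = np.linalg.norm(u)
        u = u * min(1.0, clip / norm)           # per-client clipping
        agg += wi * (u + rng.normal(0, noise_std, u.shape))
    return agg

# Three clients with skewed data volumes; the small client still counts.
updates = [rng.normal(0, 0.1, 10) for _ in range(3)]
counts = [10000, 500, 50]
delta = fair_aggregate(updates, counts)
print("aggregated update:", np.round(delta, 3))
```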

Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization

Procedia PDF Downloads 75
27979 Enhanced Analysis of Spatial Morphological Cognitive Traits in Lidukou Village through the Application of Space Syntax

Authors: Man Guo

Abstract:

This paper delves into the intricate interplay between spatial morphology and spatial cognition in Lidukou Village, utilizing a combined approach of space syntax and field data. A comparative analysis of the gathered data shows that the spatial integration level of Lidukou Village exhibits a direct positive correlation with the spatial cognitive preferences of its inhabitants. Specifically, the areas within the village that exhibit a higher degree of spatial cognition are predominantly distributed along the axis defined by Shuxiang Road. However, accessibility to historical relics remains limited and lacks a coherent systemic relationship. To address the morphological challenges faced by Lidukou Village, this study proposes optimization strategies from diverse perspectives, including the refinement of spatial mechanisms and the shaping of strategic spatial nodes.

Keywords: traditional villages, space syntax, spatial integration degree, morphological problems

Procedia PDF Downloads 43
27978 Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-satisfied predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e., situation awareness). The DCOP functions were merged with a Bayesian belief network (BBN) in the form of situation, awareness, and utility nodes. We describe how the uncertainties can be represented in the BBN to make effective predictions using the expectation-maximization algorithm or the conjugate gradient descent algorithm. The idea of variable prediction using Bayesian inference may reduce the number of variables in the agents' sampling domain and also allows the estimation of missing variables. Experimental results showed that the BBN produces more compelling predictions with samples containing uncertainties than with perfect samples. That is, Bayesian inference can help in handling the uncertainty and dynamism of DCOPs, which is a current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agents' data. The whole framework was tested on a multi-UAV mission for forest fire searching. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging.
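
As a toy illustration of the Bayesian-update idea with missing agent data, the sketch below infers a "fire" situation node from noisy UAV smoke readings, skipping missing observations; the probability tables are invented, not the paper's BBN.

```python
import numpy as np

# Tiny discrete model: fire F -> smoke S observed by a UAV sensor.
p_fire = np.array([0.9, 0.1])                 # prior P(F=0), P(F=1)
p_smoke_given_fire = np.array([[0.95, 0.05],  # P(S|F=0)
                               [0.20, 0.80]]) # P(S|F=1)

def posterior_fire(readings):
    """Bayes update over independent smoke readings; None = missing."""
    post = p_fire.copy()
    for s in readings:
        if s is None:
            continue                  # a missing datum contributes nothing
        post = post * p_smoke_given_fire[:, s]
        post /= post.sum()
    return post

print(posterior_fire([1, None, 1, 0]))   # two detections, one miss
```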

Keywords: DCOP, multi-agent reasoning, Bayesian reasoning, swarm intelligence

Procedia PDF Downloads 119
27977 Changes in Textural Properties of Zucchini Slices Under Effects of Partial Predrying and Deep-Fat-Frying

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

Changes in the textural properties of any food material during processing are significant for consumers' evaluation and directly affect their decisions. Thus, the textural properties of any food material should be assessed after processing. In the present study, zucchini slices were partially predried to control and reduce the product's final oil content. A conventional oven was used for partial dehydration of the zucchini slices, and the subsequent frying was carried out in an industrial fryer with a temperature controller. This study examined the effect of this predrying step on the textural properties of fried zucchini slices. Texture profile analysis was performed; hardness, elasticity, chewiness and cohesiveness were the studied texture parameters. Temperature and weight loss were the monitored parameters of the predrying process, whereas oil temperature and process time were controlled in frying. Optimization of the two successive processes was done by response surface methodology, one of the commonly used statistical process optimization tools. The models developed for each texture parameter predicted their values well as a function of the studied process conditions. Process optimization was performed according to target values for each property, determined from the directly fried zucchini slices that took the highest score in sensory evaluation. The results indicated that the textural properties of predried and then fried zucchini slices can be controlled by well-established equations. This is thought to be significant for the fried-food industry, where control of sensory properties is crucial to guiding consumer perception, and texture-related properties are chief among them. This project (113R015) has been supported by TUBITAK.
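
The sketch below shows the generic RSM machinery presumably involved: fit a full second-order polynomial to (predrying time, frying time) texture data by least squares and grid-search the fitted surface for a target value. All data and the target are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical texture data: hardness vs. predrying time (min) and
# frying time (s), modeled with a full second-order RSM polynomial.
x1 = rng.uniform(0, 30, 40)            # predrying time
x2 = rng.uniform(60, 240, 40)          # frying time
hardness = (20 - 0.3 * x1 + 0.05 * x2 + 0.01 * x1**2
            - 1e-4 * x2**2 + 1e-3 * x1 * x2 + rng.normal(0, 0.5, 40))

# Design matrix: 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(A, hardness, rcond=None)

def predict(p1, p2):
    return beta @ [1, p1, p2, p1**2, p2**2, p1 * p2]

# Grid search for the conditions closest to a sensory target value.
grid1, grid2 = np.meshgrid(np.linspace(0, 30, 61), np.linspace(60, 240, 61))
response = np.vectorize(predict)(grid1, grid2)
i, j = np.unravel_index(np.abs(response - 25.0).argmin(), response.shape)
print(f"target ~25: predry {grid1[i, j]:.0f} min, fry {grid2[i, j]:.0f} s")
```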

Keywords: optimization, response surface methodology, texture profile analysis, conventional oven, modelling

Procedia PDF Downloads 433
27976 Knowledge Transformation Flow (KTF) of Visually Impaired Students: The Virtual Knowledge System as a New Service Innovation

Authors: Chatcai Tangsri, Onjaree Na-Takuatoong

Abstract:

This paper aims to present the key factors that support the decision to use the technology, and to present the knowledge transformation flow of visually impaired students after the use of a virtual knowledge system, proposed as a new service innovation for universities in Thailand. Twenty-seven visually impaired students participated in this research: 25 were selected from a university that mainly conducts teaching in a non-classroom environment, while the other two were selected from a classroom teaching environment. All of them were fully involved in the study over an eight-week period and were classified into five small groups under various conditions. The research results revealed, first, that the involvement of a knowledge facilitator can drive actual use of the virtual knowledge system even when no intention-to-use behavior has developed. Secondly, in situations where the visually impaired students lack the knowledge sources usually provided by assistants (e.g., peers, audio files), they use the virtual knowledge system for both knowledge access and knowledge transfer requests. With this evidence, the need for knowledge plays a stronger role than all technology acceptance factors. Finally, this paper revealed that knowledge transfer by the usual method, in which students have a chance to physically meet, is still confirmed as their preferred method. Other aspects of technology acceptance are discussed, together with challenges and recommendations, at the end of this paper.

Keywords: knowledge system, visually impaired students, higher education, knowledge management enable technology, synchronous/asynchronous knowledge access, synchronous/asynchronous knowledge transfer

Procedia PDF Downloads 355
27975 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique

Authors: C. Manjula, Lilly Florence

Abstract:

Software technology is developing rapidly, which leads to the growth of various industries. Nowadays, software-based applications are widely adopted for business purposes. For any software industry, the development of reliable software is a challenging task, because a faulty software module may be harmful to the growth of the industry and its business. Hence, there is a need for techniques that can be used for the early prediction of software defects. Due to the complexity of manual prediction, automated software defect prediction techniques have been introduced. These techniques learn patterns from previous software versions and find the defects in the current version. They have attracted researchers due to their significant impact on industrial growth by identifying bugs in software. Several studies have been carried out on this basis, but achieving desirable defect prediction performance is still challenging. To address this issue, we present a machine learning based hybrid technique for software defect prediction. First, a genetic algorithm (GA) with an improved fitness function is used for better selection and optimization of features in the data sets. These features are then processed through a decision tree (DT) classification model. Finally, an experimental study is presented in which results from the proposed GA-DT based hybrid approach are compared with those from the DT classification technique alone. The results show that the proposed hybrid approach achieves better classification accuracy.
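
A minimal sketch of the GA-DT idea on stand-in data: a binary-mask GA selects features, with a fitness that rewards cross-validated decision tree accuracy and lightly penalizes subset size (one plausible reading of an "improved" fitness function, not necessarily the authors').

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in defect data: 20 software metrics, binary defective label.
X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=0)

def fitness(mask):
    """CV accuracy of a DT on the selected features, minus a size penalty."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.002 * mask.sum()

pop = rng.integers(0, 2, (20, 20))
for _ in range(25):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[scores.argsort()[::-1][:10]]      # elitist selection
    cut = rng.integers(1, 19)                       # one-point crossover
    children = np.vstack([np.concatenate([parents[i, :cut],
                                          parents[(i + 1) % 10, cut:]])
                          for i in range(10)])
    flip = rng.random(children.shape) < 0.05        # bit-flip mutation
    pop = np.vstack([parents, np.where(flip, 1 - children, children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```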

Keywords: decision tree, genetic algorithm, machine learning, software defect prediction

Procedia PDF Downloads 329
27974 Updating Stochastic Hosting Capacity Algorithm for Voltage Optimization Programs and Interconnect Standards

Authors: Nicholas Burica, Nina Selak

Abstract:

The ADHCAT (Automated Distribution Hosting Capacity Assessment Tool) was designed to run hosting capacity analysis on the ComEd system via stochastic DER (distributed energy resource) placement across multiple power flow simulations, evaluated against a set of violation criteria. In the initial version of the tool, the violation criteria captured only a limited set of the issues that individual departments design against for DER interconnections. Enhancements were made to the tool to align it further with individual departments' violation and operation criteria, and new modules were added for future load profile analysis. A reporting engine was created for future analytical use based on the simulations and observations in the tool.
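
A stochastic hosting capacity run boils down to: sample a DER location, grow the injection until a violation criterion trips, repeat, and report the distribution. The sketch below uses a linearized voltage-rise screen in place of the tool's full power flow; the feeder data and thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearized voltage-rise screen on a radial feeder: dV ~ P*R / V^2
# for a unity-power-factor injection. A stand-in for full power flow.
V_NOM = 12.47e3                            # V, feeder nominal voltage
V_MAX = 1.05                               # p.u. overvoltage threshold
buses_r = np.cumsum(np.full(10, 0.4))      # ohm, cumulative R to each bus

def max_der_kw(bus, step_kw=50.0):
    """Grow DER at a bus until the voltage-rise screen trips."""
    p = 0.0
    while True:
        p += step_kw * 1e3
        dv = p * buses_r[bus] / V_NOM**2   # per-unit voltage rise
        if 1.0 + dv > V_MAX:
            return (p - step_kw * 1e3) / 1e3

# Stochastic placement: sample buses, record the capacity distribution.
samples = [max_der_kw(rng.integers(0, 10)) for _ in range(200)]
print(f"hosting capacity: min {min(samples):.0f} kW, "
      f"median {np.median(samples):.0f} kW")
```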

Keywords: distributed energy resources, hosting capacity, interconnect, voltage optimization

Procedia PDF Downloads 190
27973 The Influence of Step and Fillet Shape on Nozzle Endwall Heat Transfer

Authors: Jeong Ju Kim, Hee Yoon Chung, Dong Ho Rhee, Hyung Hee Cho

Abstract:

There is a gap at the combustor-turbine interface from which leakage flow emerges to prevent hot gas ingestion into the gas turbine nozzle platform. The leakage flow protects the nozzle endwall surface from the hot gas coming from the combustor exit. To control the flow stream, the gap geometry is modified by changing the fillet radius. During operation, an unintended step can occur at the combustor-turbine platform interface, caused by thermal expansion or mismatched assembly. In this study, CFD simulations were performed to investigate the effect of the fillet and step on heat transfer and film cooling effectiveness on the nozzle platform. The Reynolds-averaged Navier-Stokes equations were solved with the SST k-omega turbulence model. For the fillet configuration, the predicted film cooling effectiveness results indicated that the fillet radius influences and can enhance film cooling effectiveness. The predicted results for the forward-facing step configuration indicated that step height likewise influences film cooling effectiveness. We suggest that designers modify the combustor-turbine interface configuration by varying the fillet radius near the endwall gap when a step is present at the interface. The gap shape was modified by increasing the fillet radius near the nozzle endwall. Both fillet radius and step height interacted with the film cooling effectiveness and heat transfer on the endwall surface.
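
For reference, the metric being compared across these cases is the adiabatic film cooling effectiveness; a minimal calculation with hypothetical endwall temperatures:

```python
# Adiabatic film cooling effectiveness:
# eta = (T_gas - T_adiabatic_wall) / (T_gas - T_coolant)
def film_cooling_effectiveness(t_gas, t_adiabatic_wall, t_coolant):
    return (t_gas - t_adiabatic_wall) / (t_gas - t_coolant)

# Hypothetical endwall readings from a CFD solution (kelvin).
print(film_cooling_effectiveness(t_gas=1600.0,
                                 t_adiabatic_wall=1150.0,
                                 t_coolant=700.0))   # -> 0.5
```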

Keywords: gas turbine, film cooling effectiveness, endwall, fillet

Procedia PDF Downloads 363
27972 Modelling Heat Transfer Characteristics in the Pasteurization Process of Medium Long Necked Bottled Beers

Authors: S. K. Fasogbon, O. E. Oguegbu

Abstract:

Pasteurization is one of the most important steps in the preservation of beer products; it improves shelf life by inactivating almost all the spoilage organisms present. However, it is notoriously difficult to determine the slowest heating zone, the temperature profile and the pasteurization units inside bottled beer during pasteurization, and there have been significant experimental and ANSYS Fluent approaches to the problem. This work developed a computational fluid dynamics model using COMSOL Multiphysics. The model was simulated to determine the slowest heating zone, the temperature profile and the pasteurization units inside the bottled beer during the pasteurization process, and the simulation results were compared with existing data in the literature. The results showed that the location and size of the slowest heating zone depend on the time-temperature combination of each zone. They also showed that the temperature profile of the bottled beer is affected by the natural convection resulting from density variation during the pasteurization process, and that the pasteurization units increase with time, subject to the temperature reached by the beer. Although the results of this work agreed with the literature with respect to the slowest heating zone and the temperature profiles, the pasteurization unit results did not agree. It is suspected that these were strongly affected by the bottle geometry and by the specific heat capacity and density of the beer in question. The work concludes that for effective pasteurization, the spray water temperature and the time spent by the bottled product in each of the pasteurization zones need to be optimized.
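
The pasteurization units referred to here follow the standard brewing definition (1 PU = 1 minute at 60 °C), which accumulates across time-temperature zones as sketched below; the zone history is hypothetical.

```python
# Standard brewing definition: PU = t * 1.393**(T - 60), t in minutes,
# T in degrees Celsius; 1 PU equals one minute at 60 degC.
def pasteurization_units(minutes, temp_c):
    return minutes * 1.393 ** (temp_c - 60.0)

# Hypothetical tunnel-pasteurizer zone history for one bottle: the PUs
# accumulated in each (time, temperature) zone simply add up.
zones = [(5, 45.0), (10, 62.0), (5, 55.0)]     # (minutes, degC)
total = sum(pasteurization_units(m, t) for m, t in zones)
print(f"total: {total:.1f} PU")
```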

Keywords: modeling, heat transfer, temperature profile, pasteurization process, bottled beer

Procedia PDF Downloads 203
27971 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks

Authors: Christina Kirsch, Adam Hatzigiannis

Abstract:

Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design, from walking track sections and direct observation in the real world to sedentary jobs in the review facility reviewing captured data on screens. Defects in rail infrastructure can have catastrophic consequences, so reviewer performance regarding accuracy and efficiency of reviews within the available time frame is essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully. Therefore, they need to know which workload assessment methodologies will provide reliable and valid data for optimizing resourcing for on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX), and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data regarding the speed of review (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session, and a short employee engagement survey at the end of the study period captured impacts on job satisfaction and motivation. The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and with review speed. The results showed that subjective NASA TLX workload assessments accurately reflect objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision (accurately detected defects, not false positives). Conversely, high temporal demand was negatively correlated with sensitivity, the percentage of existing defects detected. Review speed was significantly correlated with false negatives: as review speed increased, accuracy declined. On the other hand, review speed correlated with subjective performance assessments; reviewers thought their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections, improving defect detection rates in accordance with the efficiency-thoroughness trade-off.
Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures to ensure that recommendations for work system optimization are evidence-based and reliable.

Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis

Procedia PDF Downloads 121
27970 Design Optimization and Thermoacoustic Analysis of Pulse Tube Cryocooler Components

Authors: K. Aravinth, C. T. Vignesh

Abstract:

The usage of pulse tube cryocoolers has increased significantly, mainly due to the advantage of having no moving parts. The underlying idea of this project is to optimize the design of the pulse tube, the regenerator, and the resonator in the cryocooler, and to analyze the thermo-acoustic oscillations with respect to the design parameters. A computational fluid dynamics (CFD) model with time-dependent validation is developed to predict the cryocooler's performance. The continuity, momentum, and energy equations are solved for the various porous media regions. The effect of changing the geometries and orientation will be validated, and its influence on performance investigated. The pressure, temperature and velocity fields in the regenerator and pulse tube are evaluated. The performance of the optimized design will be compared with that of the existing pulse tube cryocooler design. The sinusoidal behavior of the cryocooler and the acoustic streaming patterns in the pulse tube will also be evaluated.

Keywords: acoustics, cryogenics, design, optimization

Procedia PDF Downloads 175
27969 Optimization of Flexible Job Shop Scheduling Problem with Sequence-Dependent Setup Times Using Genetic Algorithm Approach

Authors: Sanjay Kumar Parjapati, Ajai Jain

Abstract:

This paper presents optimization of the makespan for an ‘n’ jobs and ‘m’ machines flexible job shop scheduling problem with sequence-dependent setup times using a genetic algorithm (GA) approach. A restart scheme has also been applied to prevent premature convergence. Two case studies are taken into consideration. Results are obtained with a crossover probability of pc = 0.85 and a mutation probability of pm = 0.15. Five simulation runs are performed for each case study, and the minimum value among them is taken as the optimal makespan. The results indicate that the optimal makespan can be achieved with more than one sequence of jobs in a production order.
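
A stripped-down sketch of the GA ingredients named above (order crossover, pm = 0.15 mutation, and a restart scheme), applied to a single-machine sequence with sequence-dependent setups rather than the full n-job, m-machine FJSP; the instance data are random.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8
proc = rng.integers(5, 20, N)                 # processing times
setup = rng.integers(1, 10, (N, N))           # setup[i, j]: job i -> job j

def makespan(seq):
    total = proc[seq[0]]
    for a, b in zip(seq, seq[1:]):
        total += setup[a, b] + proc[b]
    return total

def order_crossover(p1, p2):
    """OX keeps a slice of p1 and fills the rest in p2's order."""
    i, j = sorted(rng.integers(0, N, 2))
    child = [-1] * N
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child]
    for k in range(N):
        if child[k] == -1:
            child[k] = rest.pop(0)
    return child

pop = [list(rng.permutation(N)) for _ in range(30)]
best, best_f = None, float("inf")
for gen in range(100):
    pop.sort(key=makespan)
    if makespan(pop[0]) < best_f:
        best, best_f = pop[0][:], makespan(pop[0])
    elif gen % 20 == 19:                      # restart on stagnation
        pop = [list(rng.permutation(N)) for _ in range(30)]
        continue
    parents = pop[:15]
    children = [order_crossover(parents[rng.integers(0, 15)],
                                parents[rng.integers(0, 15)])
                for _ in range(15)]
    for c in children:                        # swap mutation, pm = 0.15
        if rng.random() < 0.15:
            a, b = rng.integers(0, N, 2)
            c[a], c[b] = c[b], c[a]
    pop = parents + children

print("best makespan:", best_f, "sequence:", best)
```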

Keywords: flexible job shop, genetic algorithm, makespan, sequence dependent setup times

Procedia PDF Downloads 332
27968 Beyond Information Failure and Misleading Beliefs in Conditional Cash Transfer Programs: A Qualitative Account of Structural Barriers Explaining Why the Poor Do Not Invest in Human Capital in Northern Mexico

Authors: Francisco Fernandez de Castro

Abstract:

The Conditional Cash Transfer (CCT) model gives monetary transfers to beneficiary families on the condition that they take specific education and health actions. According to the economic rationale of CCTs, the poor need incentives to invest in their human capital because they are trapped by a lack of information and misleading beliefs; left to decide on their own, they would not choose what is in their best interests. The basic assumption of the CCT model is that the poor need incentives to take care of their own education, health and nutrition. Because of the incentives (income cash transfers and conditionalities), beneficiary families are expected to attend doctor visits and health talks, and children are expected to stay in school. These incentivized behaviors should produce outcomes such as better health and a higher level of education, which in turn will reduce poverty. Based on a grounded theory approach and a two-year period of qualitative data collection in northern Mexico, this study shows that this explanation is incomplete. In addition to information failure and inadequate beliefs, there are structural barriers in the everyday life of households that make health-nutrition and education investments difficult. In-depth interviews and observation work showed that the program takes for granted the local conditions in which beneficiary families are supposed to fulfill their co-responsibilities. The data challenged the program's assumptions and unveiled local obstacles not contemplated in the program's design. These findings have policy and research implications for the CCT agenda. They provide elements for future programming, given the gap between the CCT strategy as envisioned by policy designers and the program that beneficiary families experience on the ground. As for research consequences, these findings suggest new avenues for scholarly work on the causal mechanisms and social processes explaining CCT outcomes.

Keywords: conditional cash transfers, incentives, poverty, structural barriers

Procedia PDF Downloads 113
27967 Optimized Design, Material Selection, and Improvement of Liners, Mother Plate, and Stone Box of a Direct Charge Transfer Chute in a Sinter Plant: A Computational Approach

Authors: Anamitra Ghosh, Neeladri Paul

Abstract:

The present work aims at investigating material combinations and thereby arriving at an optimized design of the liner-mother plate arrangement and of the stone box, such that it has low cost and high weldability, is sufficiently capable of withstanding the increased corrosive shear and bending loads, and has a reduced thermal expansion coefficient at temperatures close to 1000 degrees Celsius. All the above factors have been preliminarily examined through a computational approach using ANSYS thermo-structural computation, commercial software that uses the finite element method to analyze the response of simulated design specimens of the liner-mother plate arrangement and the stone box to varied bending, shear, and thermal loads, as well as to determine the temperature gradients developed across the various surfaces of the designs. Finally, the optimized structural designs of the liner-mother plate arrangement and of the stone box, with improved materials and better structural and thermal properties, are selected via a trial-and-error method. The final improved design is therefore considered to enhance the overall life and reliability of a direct charge transfer chute, which transfers and segregates the hot sinter onto the cooler in a sinter plant.

Keywords: corrosive shear and bending loads, thermal expansion, sinter, direct charge transfer chute, stone box, liner-mother plate arrangement, computational optimization

Procedia PDF Downloads 109
27966 Computational Analysis on Thermal Performance of Chip Package in Electro-Optical Device

Authors: Long Kim Vu

Abstract:

The central processing unit in electro-optical devices is a field-programmable gate array (FPGA) chip package, allowing flexible, reconfigurable computing at the cost of energy consumption. Because the chip package is placed in an isolated device built to the IP67 waterproof standard, there is no air circulation, and heat dissipation is a challenge. In this paper, the author successfully modeled a chip package with various interposer materials, such as silicon, glass and organics. Computational fluid dynamics (CFD) was utilized to analyze the thermal performance of the chip package while considering the comprehensive heat transfer modes (conduction, convection and radiation), which yields the equivalent heat dissipation. The logic chip temperature over time is compared between the simulation and experimental results, showing excellent correlation and demonstrating that the chip modeling and simulation method is sound.

Keywords: CFD, FPGA, heat transfer, thermal analysis

Procedia PDF Downloads 184
27965 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential for reducing the overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the materials and manufacturing processes by which Type IV pressure vessels are made, design and optimization are a nuanced subject. The manifold possibilities for varying the stacking sequence and fiber orientation have an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which make the design space high dimensional, and each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation of different tank designs with varying parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Worth mentioning, the modeling of the composite overwrap is automatically generated using the Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, the different composite layups are simulated as axisymmetric models to limit the computational complexity and reduce the calculation time. Finally, the results are evaluated and compared with respect to the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations; automating the simulation process therefore provides a rapid way to compare the various designs and obtain an indication of the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties with few preliminary configuration steps for further case analysis, subsequently using, e.g., machine learning to find the optimum directly from the data pool without the simulation process.
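
For the winding-angle step, a common analytical approximation (not necessarily the exact method used here) is Clairaut's relation for geodesic paths plus fiber-volume continuity for the dome thickness build-up; the dimensions below are assumed.

```python
import numpy as np

# Geodesic winding on the dome follows Clairaut's relation
# r * sin(alpha) = r_polar, and ply thickness grows toward the polar
# opening by fiber-volume continuity (a textbook approximation).
R_CYL = 0.20        # m, cylinder radius
R_POLAR = 0.05      # m, polar opening radius
T_CYL = 0.002       # m, ply thickness on the cylinder

def winding_angle(r):
    return np.arcsin(np.clip(R_POLAR / r, -1.0, 1.0))

def ply_thickness(r):
    a_cyl = winding_angle(R_CYL)
    return T_CYL * (R_CYL * np.cos(a_cyl)) / (r * np.cos(winding_angle(r)))

for r in np.linspace(R_CYL, R_POLAR * 1.05, 6):
    print(f"r = {r:.3f} m  alpha = {np.degrees(winding_angle(r)):5.1f} deg"
          f"  t = {ply_thickness(r) * 1e3:.2f} mm")
```

Note the thickness diverges at the polar opening itself, where the winding angle approaches 90 degrees; practical implementations smooth this region.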

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 135
27964 Research on the Optimization of Satellite Mission Scheduling

Authors: Pin-Ling Yin, Dung-Ying Lin

Abstract:

Satellites play an important role in our daily lives, from monitoring the Earth's environment and providing real-time disaster imagery to predicting extreme weather events. As technology advances and demands increase, the tasks undertaken by satellites have become increasingly complex, with more stringent resource management requirements. A common challenge in satellite mission scheduling is the limited availability of resources, including onboard memory, ground station accessibility, and satellite power. In this context, efficiently scheduling and managing increasingly complex satellite missions under constrained resources has become a critical issue. The core of Satellite Onboard Activity Planning (SOAP) lies in optimizing the scheduling of the received tasks, arranging them on a timeline to form an executable onboard mission plan. This study develops an optimization model that considers the various constraints involved in satellite mission scheduling, such as non-overlapping execution periods for certain types of tasks, the requirement that tasks fall within the contact range of specified types of ground stations during their execution, onboard memory capacity limits, and collaborative constraints between different types of tasks. Specifically, this research constructs a mixed-integer programming model and solves it with a commercial optimization package. However, as the problem size grows, the problem becomes more difficult to solve, so a heuristic algorithm has been developed to address the limits of the commercial optimization package at larger scales. The goal is to plan satellite missions effectively, maximizing the total number of executable tasks while considering task priorities and ensuring that tasks are completed as early as possible without violating feasibility constraints. To verify the feasibility and effectiveness of the algorithm, test instances of various sizes were generated, and the results were validated through feedback from on-site users and compared against solutions obtained from the commercial optimization package. Numerical results show that the algorithm performs well under various scenarios, consistently meeting user requirements. The satellite mission scheduling algorithm proposed in this study can be flexibly extended to different types of satellite mission demands, achieving optimal resource allocation and enhancing the efficiency and effectiveness of satellite mission execution.
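
A toy version of the selection core of such a model, assuming fixed task windows: binary variables choose tasks, pairwise constraints forbid overlapping windows, and a memory budget caps the selection. The task data and this PuLP formulation are illustrative only, not the study's model.

```python
import pulp

tasks = {            # id: (start, end, priority, memory delta)
    "img1": (0, 4, 5, 2.0),
    "img2": (3, 7, 3, 1.5),
    "dl1":  (6, 9, 4, -2.5),    # a downlink frees memory
    "img3": (8, 12, 2, 1.0),
}
MEM_CAP = 3.0

prob = pulp.LpProblem("satellite_tasks", pulp.LpMaximize)
x = {t: pulp.LpVariable(f"x_{t}", cat="Binary") for t in tasks}
prob += pulp.lpSum(tasks[t][2] * x[t] for t in tasks)   # total priority

ids = list(tasks)
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        a, b = tasks[ids[i]], tasks[ids[j]]
        if a[0] < b[1] and b[0] < a[1]:         # time windows overlap
            prob += x[ids[i]] + x[ids[j]] <= 1

prob += pulp.lpSum(tasks[t][3] * x[t] for t in tasks) <= MEM_CAP

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([t for t in tasks if x[t].value() == 1])
```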

Keywords: mixed-integer programming, meta-heuristics, optimization, resource management, satellite mission scheduling

Procedia PDF Downloads 25
27963 Iterative Replanning of Diesel Generator and Energy Storage System for Stable Operation of an Isolated Microgrid

Authors: Jiin Jeong, Taekwang Kim, Kwang Ryel Ryu

Abstract:

The target microgrid in this paper is isolated from the large central power system and is assumed to consist of wind generators, photovoltaic power generators, an energy storage system (ESS), a diesel power generator, the community load, and a dump load. The operation of such a microgrid can be hazardous because of the uncertainty in predicting power supply and demand, and especially because of the high fluctuation of the output from the wind generators. In this paper, we propose an iterative replanning method for determining the appropriate level of diesel generation and the charging/discharging cycles of the ESS for the upcoming one-hour horizon. To cope with the uncertainty in the estimation of supply and demand, the one-hour plan is rebuilt repeatedly at regular one-minute intervals by rolling the one-hour horizon. Since the plan should be built with a sufficiently large safety margin to avoid any possible blackout, some energy waste through the dump load is inevitable. In our approach, the level of the safety margin is optimized through learning from past experience. The simulation experiments show that our method, combined with the margin optimization, can reduce the dump load compared to the method without such optimization.
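
A crude sketch of the rolling one-minute replanning loop with an adaptive safety margin; the forecasts, error models, ESS discharge limit, and margin-update rule are all invented stand-ins for the learned margin described above.

```python
import numpy as np

rng = np.random.default_rng(0)

ess, ESS_CAP, margin = 50.0, 100.0, 5.0   # kWh state, capacity; kW margin
dumped = 0.0

for minute in range(180):
    # Re-forecast at every minute and set diesel with a safety margin,
    # counting on at most 20 kW of ESS support.
    demand_fc = 80 + 10 * np.sin(minute / 30)          # kW forecast
    wind_fc = 30 + rng.normal(0, 8)
    diesel = max(0.0, demand_fc + margin - wind_fc - min(ess, 20.0))

    demand = demand_fc + rng.normal(0, 2)              # realized values
    wind = max(0.0, wind_fc + rng.normal(0, 6))
    surplus = diesel + wind - demand
    if surplus >= 0:
        charge = min(surplus, ESS_CAP - ess)           # ESS first ...
        ess += charge
        dumped += surplus - charge                     # ... then dump load
    else:
        ess = max(0.0, ess + surplus)                  # discharge ESS

    err = abs(wind - wind_fc)                          # adapt the margin
    margin = 0.95 * margin + 0.05 * (2.0 * err)

print(f"final margin {margin:.1f} kW, energy dumped {dumped / 60:.1f} kWh")
```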

Keywords: microgrid, operation planning, power efficiency optimization, supply and demand prediction

Procedia PDF Downloads 432
27962 The Impact of Transaction Costs on Rebalancing an Investment Portfolio in Portfolio Optimization

Authors: B. Marasović, S. Pivac, S. V. Vukasović

Abstract:

Constructing a portfolio of investments is one of the most significant financial decisions facing individuals and institutions. In accordance with modern portfolio theory, maximization of return at minimal risk should be the investment goal of any successful investor. In addition, the costs incurred when setting up a new portfolio or rebalancing an existing one must be included in any realistic analysis. In this paper, rebalancing an investment portfolio in the presence of transaction costs on the Croatian capital market is analyzed. The model applied in the paper is an extension of the standard portfolio mean-variance optimization model in which transaction costs are incurred to rebalance an investment portfolio. This model allows different costs for different securities, and different costs for buying and selling. In order to find an efficient portfolio using this model, first the solution of a quadratic programming problem of similar size to the Markowitz model, and then the solution of a linear programming problem, have to be found. Furthermore, the paper investigates the impact of transaction costs on the efficient frontier. Moreover, it is shown that the global minimum variance portfolio on the efficient frontier always has the same level of risk regardless of the amount of transaction costs. Although the position of the efficient frontier depends on both the amount of transaction costs and the initial portfolio, it can be concluded that the extreme right portfolio on the efficient frontier always contains only one stock, the one with the highest expected return and the highest risk.
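
A minimal sketch of this model family: mean-variance rebalancing with asset-specific, buy/sell-asymmetric proportional costs, written here as a single convex program for brevity rather than the paper's quadratic-then-linear procedure. All market data are random stand-ins.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)

n = 5
mu = rng.uniform(0.02, 0.12, n)             # expected returns
A = rng.normal(0, 0.1, (n, n))
Sigma = A @ A.T + 0.05 * np.eye(n)          # covariance (PSD)
w0 = np.full(n, 1.0 / n)                    # current portfolio weights
c_buy = np.full(n, 0.004)                   # 0.4% proportional buy cost
c_sell = np.full(n, 0.006)                  # 0.6% proportional sell cost
gamma = 3.0                                  # risk aversion

w = cp.Variable(n)
buy = cp.Variable(n, nonneg=True)
sell = cp.Variable(n, nonneg=True)
cost = c_buy @ buy + c_sell @ sell
objective = cp.Maximize(mu @ w - gamma * cp.quad_form(w, Sigma) - cost)
constraints = [w == w0 + buy - sell,        # rebalancing accounting
               cp.sum(w) + cost == 1,       # costs paid from the budget
               w >= 0]                      # no short selling
cp.Problem(objective, constraints).solve()
print("rebalanced weights:", np.round(w.value, 3))
```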

Keywords: Croatian capital market, Markowitz model, fractional quadratic programming, portfolio optimization, transaction costs

Procedia PDF Downloads 385
27961 Simulation of a Fluid Catalytic Cracking Process

Authors: Sungho Kim, Dae Shik Kim, Jong Min Lee

Abstract:

The fluid catalytic cracking (FCC) process is one of the most important processes in the modern refinery industry, and this paper focuses on it. As the FCC process is difficult to model well, due to its nonlinearities and the various interactions between its process variables, rigorous process modeling of the whole FCC plant is required for control and plant-wide optimization. In this study, a process design for the FCC plant, including the riser reactor, main fractionator, and gas processing unit, was developed. The reactor model was described based on a four-lumped kinetic scheme. The main fractionator, gas processing unit and other process units were designed to match real plant data, using the process flowsheet simulator Aspen PLUS. The custom reactor model was integrated with the process flowsheet simulator to develop an integrated process model.
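
A minimal sketch of a four-lump kinetic scheme of the kind referenced (gas oil, gasoline, light gas, coke), integrated along the riser; the rate constants and reaction orders are illustrative assumptions, not the study's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Four-lump riser kinetics: gas oil cracks (second order) into gasoline,
# light gas, and coke; gasoline cracks (first order) into gas and coke.
k1, k2, k3 = 0.20, 0.04, 0.02     # gas oil -> gasoline / gas / coke
k4, k5 = 0.010, 0.005             # gasoline -> gas / coke

def rhs(t, y):
    y_go, y_gl, y_gas, y_ck = y
    crack = (k1 + k2 + k3) * y_go**2
    return [-crack,
            k1 * y_go**2 - (k4 + k5) * y_gl,
            k2 * y_go**2 + k4 * y_gl,
            k3 * y_go**2 + k5 * y_gl]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0])
go, gl, gas, ck = sol.y[:, -1]
print(f"outlet: gas oil {go:.2f}, gasoline {gl:.2f}, "
      f"gas {gas:.2f}, coke {ck:.2f}  (mass sum {go + gl + gas + ck:.2f})")
```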

Keywords: fluid catalytic cracking, simulation, plant data, process design

Procedia PDF Downloads 457