Search results for: delays resulting from two separate causes at the same time
19769 Investigation of Compressive Strength of Fly Ash-Based Geopolymer Bricks with Hierarchical Bayesian Path Analysis
Authors: Ersin Sener, Ibrahim Demir, Hasan Aykut Karaboga, Kadir Kilinc
Abstract:
Bayesian methods, which have a very wide range of applications, are applied to data obtained from an experimental design for the production of F-class fly ash-based geopolymer bricks. In this study, the dependent variable is compressive strength; the independent variables are treatment type (oven and steam), treatment time, molding time, temperature, water absorption ratio, and density. The effect of the independent variables on compressive strength is investigated. There is no difference between treatment types, but there are correlations among the independent variables. Therefore, hierarchical Bayesian path analysis is applied. The analysis indicates that the effects of treatment time, temperature, and density on compressive strength are high, while those of molding time and water absorption ratio are relatively low.
Keywords: experimental design, F class fly ash, geopolymer bricks, hierarchical Bayesian path analysis
Procedia PDF Downloads 387
19768 Evaluating the Implementation of a Quality Management System in the COVID-19 Diagnostic Laboratory of a Tertiary Care Hospital in Delhi
Authors: Sukriti Sabharwal, Sonali Bhattar, Shikhar Saxena
Abstract:
Introduction: The COVID-19 molecular diagnostic laboratory is the cornerstone of COVID-19 diagnosis, as the patient’s treatment and management protocol depend on the molecular results. For this purpose, it is extremely important that the laboratory generating these results adheres to quality management processes to increase the accuracy and validity of the reports generated. We started our own molecular diagnostic setup at the onset of the pandemic. Therefore, we conducted this study to generate our quality management data to help us improve on our weak points. Materials and Methods: A total of 14561 samples were evaluated by the retrospective observational method. The quality variables analysed were classified into pre-analytical, analytical, and post-analytical variables, and the results were presented in percentages. Results: Among the pre-analytical variables, sample leaking was the most common cause of rejection of samples (134/14561, 0.92%), followed by non-generation of SRF ID (76/14561, 0.52%) and non-compliance with triple packaging (44/14561, 0.3%). The other pre-analytical aspects assessed were incomplete patient identification (17/14561, 0.11%), insufficient quantity of samples (12/14561, 0.08%), missing forms/samples (7/14561, 0.04%), samples in the wrong vials/empty VTM tubes (5/14561, 0.03%), and LIMS entry not done (2/14561, 0.01%). We were unable to obtain internal quality control in 0.37% of samples (55/14561). We also experienced two incidents of cross-contamination among the samples, resulting in false-positive results. Among the post-analytical factors, a total of 0.07% of samples (11/14561) could not be dispatched within the stipulated time frame. Conclusion: Adherence to quality control processes is foremost for the smooth running of any diagnostic laboratory, especially those involved in critical reporting.
Not only do the indicators help in keeping the laboratory parameters in check, but they also allow comparison with other laboratories.
Keywords: laboratory quality management, COVID-19, molecular diagnostics, healthcare
Procedia PDF Downloads 164
19767 Performance Evaluation of the Classic seq2seq Model versus a Proposed Semi-supervised Long Short-Term Memory Autoencoder for Time Series Data Forecasting
Authors: Aswathi Thrivikraman, S. Advaith
Abstract:
The study is aimed at designing encoders for deciphering intricacies in time series data by redescribing the dynamics operating on a lower-dimensional manifold. A semi-supervised LSTM autoencoder is devised and investigated to see if the latent representation of the time series data can better forecast the data. End-to-end training of the LSTM autoencoder, together with another LSTM network that is connected to the latent space, forces the hidden states of the encoder to represent the most meaningful latent variables relevant for forecasting. Furthermore, the study compares the predictions with those of a traditional seq2seq model.
Keywords: LSTM, autoencoder, forecasting, seq2seq model
Procedia PDF Downloads 156
19766 Effect of Rehabilitation on Outcomes for Persons with Traumatic Brain Injury: Results from a Single Center
Authors: Savaş Karpuz, Sami Küçükşen
Abstract:
The aim of this study is to investigate the effectiveness of neurological rehabilitation in patients with traumatic brain injury. Participants were 45 consecutive adults with traumatic brain injury who received neurologic rehabilitation. Sociodemographic characteristics of the patients, the cause of the injury, the duration of the coma and posttraumatic amnesia, the length of stay in other inpatient clinics before rehabilitation, the time between injury and admission to the rehabilitation clinic, and the length of stay in the rehabilitation clinic were recorded. The differences in functional status between admission and discharge were determined with the Disability Rating Scale (DRS), Functional Independence Measure (FIM), and Functional Ambulation Scale (FAS), and levels of cognitive functioning were determined with the Rancho Los Amigos Scale (RLAS). A significant improvement relative to admission was identified in the functional status of patients who had been given the intensive in-hospital cognitive rehabilitation program. At discharge, statistically significant differences relative to admission were obtained in DRS, FIM, FAS, and RLAS scores. Better improvement in functional status was detected in patients with lower DRS scores and higher FIM and RLAS scores at admission. Neurologic rehabilitation significantly affects the recovery of functional status after traumatic brain injury.
Keywords: traumatic brain injury, rehabilitation, functional status, neurological
Procedia PDF Downloads 229
19765 Sliding Mode Control for Active Suspension System with Actuator Delay
Authors: Aziz Sezgin, Yuksel Hacioglu, Nurkan Yagiz
Abstract:
A sliding mode controller for a vehicle active suspension system is designed in this study. The widely used quarter-car model is preferred, and the aim is to improve the ride comfort of the passengers. The effect of the actuator time delay, which may arise due to information processing, sensors, or actuator dynamics, is also taken into account during the design of the controller. The sliding mode controller accounts for the actuator time delay by using a Smith predictor. The successful performance of the designed controller is confirmed via numerical results.
Keywords: sliding mode control, active suspension system, actuator, time delay, vehicle
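The combination described above can be illustrated on a much simpler plant than the quarter-car model. The sketch below is our illustration, not the paper's controller: a first-order plant with input delay, hypothetical gains, and a smoothed sign function to suppress chattering. It shows how a Smith predictor lets a sliding mode law act on a delay-free prediction of the state.

```python
import math
from collections import deque

def simulate(delay_steps=20, dt=0.001, t_end=2.0, a=1.0, b=1.0, K=5.0):
    """Sliding mode control of a first-order plant x' = -a*x + b*u(t - d),
    with the actuator delay d compensated by a Smith predictor.
    All gains and plant parameters are illustrative, not from the paper."""
    n = int(t_end / dt)
    x = 1.0      # true plant state (starts off the target 0)
    xm = 1.0     # internal model driven by the undelayed input
    xm_d = 1.0   # internal model driven by the delayed input
    buf = deque([0.0] * delay_steps)  # actuator delay line
    for _ in range(n):
        # Smith predictor: undelayed model output plus the mismatch
        # between the real plant and the delayed model
        x_pred = xm + (x - xm_d)
        s = x_pred                    # sliding surface s = x for a 1st-order plant
        u = -K * math.tanh(s / 0.05)  # smoothed sign() to avoid chattering
        buf.append(u)
        u_delayed = buf.popleft()
        x += dt * (-a * x + b * u_delayed)        # plant sees the delayed input
        xm += dt * (-a * xm + b * u)              # model without delay
        xm_d += dt * (-a * xm_d + b * u_delayed)  # model with delay
    return x

final_state = simulate()
```

Because the internal model matches the plant exactly in this sketch, the feedback loop effectively operates on a delay-free system, which is the core benefit of the Smith predictor.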
Procedia PDF Downloads 409
19764 Optimization of Processing Parameters of Acrylonitrile–Butadiene–Styrene Sheets Integrated by Taguchi Method
Authors: Fatemeh Sadat Miri, Morteza Ehsani, Seyed Farshid Hosseini
Abstract:
The present research is concerned with the optimization of extrusion parameters of ABS sheets by the Taguchi experimental design method. In this design method, the effects of three parameters, recycled ABS content, processing temperature, and degassing time, on the mechanical properties, hardness, HDT, and color matching of ABS sheets were investigated. According to experimental test data, the highest levels of tensile strength and HDT belong to the sample with 5% recycled ABS, a processing temperature of 230°C, and a degassing time of 3 hours. Additionally, the minimum levels of MFI and color matching belong to this sample, too. The present results are in good agreement with the Taguchi method. Based on the outcomes of the Taguchi design method, degassing time has the greatest effect on the mechanical properties of ABS sheets.
Keywords: ABS, process optimization, Taguchi, mechanical properties
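The Taguchi workflow reported above, ranking factors by the range of their main effects on a signal-to-noise ratio, can be sketched as follows. The L9 layout is standard, but the response values are purely illustrative, chosen so that degassing time dominates as the abstract concludes; they are not the paper's measurements.

```python
import math

# Hypothetical L9 orthogonal array: 3 factors (A = recycled ABS %,
# B = processing temperature, C = degassing time) at 3 levels each.
L9 = [  # ((level of A, B, C), illustrative tensile strength)
    ((0, 0, 0), 38.0), ((0, 1, 1), 41.5), ((0, 2, 2), 44.0),
    ((1, 0, 1), 40.0), ((1, 1, 2), 43.5), ((1, 2, 0), 37.5),
    ((2, 0, 2), 42.0), ((2, 1, 0), 36.0), ((2, 2, 1), 39.5),
]

def sn_larger_is_better(y):
    # Taguchi signal-to-noise ratio for a "larger is better" response
    return -10.0 * math.log10(1.0 / y ** 2)

def main_effects(runs, factor):
    # Average S/N ratio at each level of one factor
    by_level = {}
    for levels, y in runs:
        by_level.setdefault(levels[factor], []).append(sn_larger_is_better(y))
    return {lvl: sum(v) / len(v) for lvl, v in by_level.items()}

def effect_range(runs, factor):
    # Range of the main effects: a large range means a dominant factor
    eff = main_effects(runs, factor)
    return max(eff.values()) - min(eff.values())

# Rank factors by influence (0 = recycled ABS, 1 = temperature, 2 = degassing time)
ranking = sorted(range(3), key=lambda f: effect_range(L9, f), reverse=True)
```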
Procedia PDF Downloads 73
19763 Optimizing Fire Suppression Time in Buildings by Forming a Fire Feedback Loop
Authors: Zhdanova A. O., Volkov R. S., Kuznetsov G. V., Strizhak P. A.
Abstract:
Fires in different types of facilities are a serious problem worldwide. It is still an unaccomplished science and technology objective to establish the minimum number and type of sensors in automatic compartment fire suppression systems that would turn the fire-extinguishing agent spraying on and off in real time depending on the state of the fire, minimizing the amount of agent applied, the delay in fire suppression and system response, as well as the time of combustion suppression. Based on the results of experimental studies, the conclusion was made that it is reasonable to use a gas analysis system and heat sensors (in the event of their prior activation) to determine the effectiveness of fire suppression (i.e., that the fire-extinguishing composition is interacting with the fire). Thus, the concentration of CO in the interaction of the firefighting liquid with the fire increases to 0.7–1.2%, which indicates a slowdown in the flame combustion, and heat sensors stop responding at a gas medium temperature below 80 °C, which shows a gradual decrease in the heat release from the fire. The evidence from this study suggests that the information received from video recording equipment (video camera) should be used in real time as an additional parameter confirming fire suppression. Research was supported by the Russian Science Foundation (project No 21-19-00009, https://rscf.ru/en/project/21-19-00009/).
Keywords: compartment fires, fire suppression, continuous control of fire behavior, feedback systems
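The sensor-fusion rule described above, where a CO rise marks the agent reacting with the flame and heat sensors falling silent below 80 °C marks decaying heat release, can be reduced to a small feedback predicate. This is a minimal sketch of that logic, not the authors' control system; the thresholds are taken from the figures quoted in the abstract.

```python
def suppression_active(co_percent, temp_c, co_rise_threshold=0.7, temp_off_c=80.0):
    """Decide whether agent spraying should continue.
    Thresholds from the abstract: CO rising to 0.7-1.2% signals flame
    slowdown; heat sensors stop responding below 80 C."""
    flame_weakening = co_percent >= co_rise_threshold  # agent is reacting with the fire
    heat_subsiding = temp_c < temp_off_c               # heat release is decreasing
    # keep spraying until both indicators confirm suppression
    return not (flame_weakening and heat_subsiding)
```

In the paper's setup, a video feed would serve as an additional confirmation channel before the loop finally switches the spraying off.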
Procedia PDF Downloads 129
19762 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is under the travel time limitation. This work is pioneering: whereas the existing literature evaluates travel time reliability via a single optimal path, the proposed QoS measure captures the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc has a new travel time weight of value 0. Each intermediate node is replaced by two nodes u and v, and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left.
The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network which has 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
Keywords: quality of service, reliability, transportation network, travel time
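The QoS definition above can be made concrete on a toy network. The sketch below evaluates it by complete enumeration of node states, which is exactly the baseline the paper improves upon, for a source-sink network with two intermediate nodes; all capacities, time weights, and probabilities are invented for illustration.

```python
from itertools import product

# Toy instance: two intermediate nodes between source and sink.
# Each node i has (capacity, per-unit travel time, operation probability).
nodes = [(2, 2.0, 0.9), (3, 5.0, 0.8)]
demand = 3
time_limit = 14.0

def min_time_route(up_states):
    """Cheapest total transmission time to push `demand` units through the
    operating nodes, or None if the surviving capacity is insufficient."""
    avail = sorted((t, cap) for (cap, t, _), up in zip(nodes, up_states) if up)
    left, total = demand, 0.0
    for t, cap in avail:          # fill the cheapest nodes first
        use = min(left, cap)
        total += use * t
        left -= use
        if left == 0:
            return total
    return None

def qos():
    """P(demand is deliverable AND total transmission time <= time_limit),
    by complete enumeration of the 2^n node states."""
    prob = 0.0
    for up_states in product([True, False], repeat=len(nodes)):
        p = 1.0
        for (_, _, pi), up in zip(nodes, up_states):
            p *= pi if up else 1.0 - pi
        t = min_time_route(up_states)
        if t is not None and t <= time_limit:
            prob += p
    return prob
```

Here only the state with both nodes operating is feasible (9 time units for 3 units of demand), so the QoS equals the probability of that state, 0.9 × 0.8 = 0.72. The paper's decomposition algorithm avoids enumerating all 2^n states.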
Procedia PDF Downloads 221
19761 Photocatalytic Degradation of Methylene Blue Dye Using Cuprous Oxide/Graphene Nanocomposite
Authors: Bekan Bogale, Tsegaye Girma Asere, Tilahun Yai, Fekadu Melak
Abstract:
Aims: To study photocatalytic degradation of methylene blue dye on cuprous oxide/graphene nanocomposite. Background: Cuprous oxide (Cu2O) nanoparticles are among the metal oxides that demonstrated photocatalytic activity. However, the stability of Cu2O nanoparticles due to the fast recombination rate of electron/hole pairs remains a significant challenge in their photocatalytic applications. This, in turn, leads to mismatching of the effective bandgap separation, tending to reduce the photocatalytic activity of the desired organic waste (MB). To overcome these limitations, graphene has been combined with cuprous oxides, resulting in cuprous oxide/graphene nanocomposite as a promising photocatalyst. Objective: In this study, Cu2O/graphene nanocomposite was synthesized and evaluated for its photocatalytic performance of methylene blue (MB) dye degradation. Method: Cu2O/graphene nanocomposites were synthesized from graphite powder and copper nitrate using the facile sol-gel method. Batch experiments have been conducted to assess the applications of the nanocomposites for MB degradation. Parameters such as contact time, catalyst dosage, and pH of the solution were optimized for maximum MB degradation. The prepared nanocomposites were characterized by using UV-Vis, FTIR, XRD, and SEM. The photocatalytic performance of Cu2O/graphene nanocomposites was compared against Cu2O nanoparticles for cationic MB dye degradation. Results: Cu2O/graphene nanocomposite exhibits higher photocatalytic activity for MB degradation (with a degradation efficiency of 94%) than pure Cu2O nanoparticles (67%). This has been accomplished after 180 min of irradiation under visible light. The kinetics of MB degradation by Cu2O/graphene composites can be demonstrated by the second-order kinetic model. The synthesized nanocomposite can be used for more than three cycles of photocatalytic MB degradation. 
Conclusion: This work provides new insights into Cu2O/graphene nanocomposites as high-performance photocatalysts for degrading MB, playing a great role in environmental protection with respect to MB dye.
Keywords: methylene blue, photocatalysis, cuprous oxide, graphene nanocomposite
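The second-order rate law mentioned above takes the linearized form 1/C(t) = 1/C0 + kt. The snippet below back-calculates an illustrative rate constant from the 94%-in-180-min figure reported in the abstract; the assumed initial concentration C0 is ours, not the paper's.

```python
# Illustrative use of the second-order rate law 1/C(t) = 1/C0 + k*t.
C0 = 10.0          # initial MB concentration, mg/L (assumed, not from the paper)
efficiency = 0.94  # 94% degradation after 180 min (from the abstract)
t_final = 180.0    # minutes

Ct = C0 * (1.0 - efficiency)               # remaining concentration
k = (1.0 / Ct - 1.0 / C0) / t_final        # second-order rate constant, L/(mg*min)

def conc(t):
    # concentration predicted by the second-order model at time t
    return 1.0 / (1.0 / C0 + k * t)

removal_at_end = 1.0 - conc(t_final) / C0  # fraction of MB removed at t_final
```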
Procedia PDF Downloads 189
19760 Proactive SoC Balancing of Li-ion Batteries for Automotive Application
Authors: Ali Mashayekh, Mahdiye Khorasani, Thomas Weyh
Abstract:
The demand for battery electric vehicles (BEV) is steadily increasing, and it can be assumed that electric mobility will dominate the market for individual transportation in the future. Regarding BEVs, the focus of state-of-the-art research and development is on vehicle batteries since their properties primarily determine vehicles' characteristic parameters, such as price, driving range, charging time, and lifetime. State-of-the-art battery packs consist of invariable configurations of battery cells, connected in series and parallel. A promising alternative is battery systems based on multilevel inverters, which can alter the configuration of the battery cells during operation via semiconductor switches. The main benefit of such topologies is that a three-phase AC voltage can be directly generated from the battery pack, and no separate power inverters are required. Therefore, modular battery systems based on different multilevel inverter topologies and reconfigurable battery systems are currently under investigation. Another advantage of the multilevel concept is that the possibility to reconfigure the battery pack allows battery cells with different states of charge (SoC) to be connected in parallel, and thus low-loss balancing can take place between such cells. In contrast, in conventional battery systems, parallel-connected (hard-wired) battery cells are discharged via bleeder resistors to keep the individual SoCs of the parallel battery strands balanced, ultimately reducing the vehicle range. Different multilevel inverter topologies and reconfigurable batteries that make the aforementioned advantages possible have been described in the available literature. However, what has not yet been described is what an intelligent operating algorithm needs to look like to keep the SoCs of the individual battery strands of a modular battery system with integrated power electronics balanced.
Therefore, this paper suggests an SoC balancing approach for Battery Modular Multilevel Management (BM3) converter systems, which can similarly be used for reconfigurable battery systems or other multilevel inverter topologies with parallel connectivity. The suggested approach attempts to utilize all converter modules simultaneously (bypassing individual modules should be avoided) because the parallel connection of adjacent modules reduces the phase strand's battery impedance. Furthermore, the presented approach tries to reduce the number of switching events when changing the switching state combination. Thereby, the ohmic battery losses and switching losses are kept as low as possible. Since no power is dissipated in any designated bleeder resistors and no designated active balancing circuitry is required, the suggested approach can be categorized as a proactive balancing approach. To verify the algorithm's validity, simulations are used.
Keywords: battery management system, BEV, battery modular multilevel management (BM3), SoC balancing
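One simple way to realize the proactive balancing idea above, without reproducing the BM3 algorithm itself, is a selection rule that, at each output level, prefers the modules with the highest SoC and breaks ties in favor of modules that are already conducting, so high-SoC strands are drained first while switching events stay low. The sketch below is a hypothetical illustration of that rule, not the paper's algorithm.

```python
def choose_modules(socs, needed, previous=frozenset()):
    """Pick `needed` modules for the next output level: highest SoC first,
    and on SoC ties prefer modules that were already conducting, which
    limits switching events. Illustrative rule, not the BM3 algorithm."""
    order = sorted(
        range(len(socs)),
        # round() groups near-equal SoCs so the tie-break can take effect
        key=lambda i: (-round(socs[i], 3), 0 if i in previous else 1, i),
    )
    return set(order[:needed])

def discharge_step(socs, needed, previous, amount=0.01):
    """One simulation step: the chosen modules each deliver `amount` of SoC."""
    chosen = choose_modules(socs, needed, previous)
    return [s - amount if i in chosen else s
            for i, s in enumerate(socs)], chosen

# Toy run: four modules with unequal SoCs, two conducting per step.
socs, prev = [0.9, 0.8, 0.85, 0.8], set()
for _ in range(30):
    socs, prev = discharge_step(socs, 2, prev)
```

Repeatedly discharging the highest-SoC modules drives the strand SoCs together, which is the balancing effect; in a real converter the selection would also honor the required phase voltage level.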
Procedia PDF Downloads 120
19759 X-Ray Diffraction and Mӧssbauer Studies of Nanostructured Ni45Al45Fe10 Powders Elaborated by Mechanical Alloying
Authors: N. Ammouchi
Abstract:
We have studied the effect of milling time on the structural and hyperfine properties of the Ni45Al45Fe10 compound elaborated by mechanical alloying. The elaboration was performed using a planetary ball mill at different milling times. The as-milled powders were characterized by X-ray diffraction (XRD) and Mӧssbauer spectroscopy. From the XRD spectra, we show that the β-NiAl(Fe) phase was completely formed after 24 h of milling. As the milling time increases, the lattice parameter increases, whereas the grain size decreases to a few nanometres and the mean level of microstrains increases. The analysis of the Mӧssbauer spectra indicates that, in addition to a ferromagnetic α-Fe phase, a paramagnetic disordered NiAl(Fe) solid solution is observed after 2 h, and only this phase is present after 12 h.
Keywords: NiAlFe, nanostructured powders, X-ray diffraction, Mӧssbauer spectroscopy
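Grain sizes of a few nanometres, as reported above, are typically estimated from XRD peak broadening with the Scherrer equation D = Kλ/(β cos θ). The helper below assumes the Cu Kα wavelength and an illustrative peak; the abstract does not report the actual peak widths.

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K*lambda / (beta * cos(theta)) in nm.
    Cu K-alpha wavelength and shape factor K = 0.9 assumed; the peak
    position and FWHM passed in below are illustrative values."""
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle, radians
    beta = math.radians(fwhm_deg)              # peak FWHM, radians
    return K * wavelength_nm / (beta * math.cos(theta))

# e.g. a peak at 2*theta = 44 deg with 0.9 deg FWHM -> a size of ~10 nm,
# and broader peaks (longer milling) give smaller crystallites
size_nm = scherrer_size(44.0, 0.9)
```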
Procedia PDF Downloads 379
19758 Fast and Efficient Algorithms for Evaluating Uniform and Nonuniform Lagrange and Newton Curves
Authors: Taweechai Nuntawisuttiwong, Natasha Dejdumrong
Abstract:
Newton-Lagrange interpolations are widely used in numerical analysis. However, their construction requires quadratic computational time. In computer-aided geometric design (CAGD), there are polynomial curves, Wang-Ball, DP, and Dejdumrong curves, which have linear-time-complexity algorithms. Thus, the computational time for Newton-Lagrange interpolations can be reduced by applying the algorithms of Wang-Ball, DP, and Dejdumrong curves. In order to use the Wang-Ball, DP, and Dejdumrong algorithms, it is first necessary to convert Newton-Lagrange polynomials into Wang-Ball, DP, or Dejdumrong polynomials. In this work, the algorithms for converting both uniform and non-uniform Newton-Lagrange polynomials into Wang-Ball, DP, and Dejdumrong polynomials are investigated. Thus, the computational time for representing Newton-Lagrange polynomials can be reduced to linear complexity. In addition, the other utilities of CAGD curves can be applied to modify the Newton-Lagrange curves.
Keywords: Lagrange interpolation, linear complexity, monomial matrix, Newton interpolation
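For context on the complexity claim above: constructing the Newton form costs O(n²) in the divided-difference table, but each subsequent evaluation is linear via Horner-like nesting. The sketch below shows the standard construction and O(n) evaluation; it does not reproduce the paper's Wang-Ball/DP/Dejdumrong conversions.

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients of the Newton interpolating
    polynomial. The table is built in place: O(n^2) construction."""
    n = len(xs)
    c = list(ys)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(c, xs, x):
    """Horner-like nested evaluation of the Newton form: O(n) per point."""
    acc = c[-1]
    for i in range(len(c) - 2, -1, -1):
        acc = acc * (x - xs[i]) + c[i]
    return acc

# Interpolate y = x^3 on four nodes and evaluate between them.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 8.0, 27.0]
c = newton_coeffs(xs, ys)
value = newton_eval(c, xs, 1.5)  # exact for a cubic: 1.5**3 = 3.375
```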
Procedia PDF Downloads 234
19757 Flood Risk Management in the Semi-Arid Regions of Lebanon - Case Study “Semi Arid Catchments, Ras Baalbeck and Fekha”
Authors: Essam Gooda, Chadi Abdallah, Hamdi Seif, Safaa Baydoun, Rouya Hdeib, Hilal Obeid
Abstract:
Floods are a common natural disaster occurring in semi-arid regions of Lebanon. This results in damage to human life and deterioration of the environment. Despite their destructive nature and their immense impact on the socio-economy of the region, flash floods have not received adequate attention from policy and decision makers. This is mainly because of poor understanding of the processes involved and the measures needed to manage the problem. The current understanding of flash floods remains at the level of general concepts; most policy makers have yet to recognize that flash floods are distinctly different from normal riverine floods in terms of causes, propagation, intensity, impacts, predictability, and management. Flash floods are generally not investigated as a separate class of event but are rather reported as part of the overall seasonal flood situation. As a result, Lebanon generally lacks policies, strategies, and plans relating specifically to flash floods. The main objective of this research is to improve flash flood prediction by providing new knowledge and a better understanding of the hydrological processes governing flash floods in the east catchments of El Assi River. This includes developing rainstorm time distribution curves that are unique to this type of study region and analyzing, investigating, and developing a relationship between arid watershed characteristics (including urbanization) and the flood frequency of flows near villages in Ras Baalbeck and Fekha. This paper discusses different levels of integration approaches between GIS and hydrological models (HEC-HMS & HEC-RAS) and presents a case study covering all the tasks of creating model input, editing data, running the model, and displaying output results. The study area corresponds to the East Basin (Ras Baalbeck & Fekha), comprising nearly 350 km2 and situated in the Bekaa Valley of Lebanon.
The case study presented in this paper has a database derived from Lebanese Army topographic maps of this region. ArcMap was used to digitize the contour lines, streams, and other features from the topographic maps, and the digital elevation model (DEM) grid was derived for the study area. The next steps in this research are to incorporate rainfall time series data from the Arsal, Fekha, and Deir El Ahmar stations to build a hydrologic data model within a GIS environment and to combine ArcGIS/ArcMap, HEC-HMS, and HEC-RAS models in order to produce a spatial-temporal model for floodplain analysis at a regional scale. In this study, HEC-HMS and SCS methods were chosen to build the hydrologic model of the watershed. The model was then calibrated using the flood event that occurred between the 7th and 9th of May 2014, which is considered exceptionally extreme because of the length of time the flows lasted (15 hours) and the fact that it covered both the watershed of Arsal and that of Ras Baalbeck. The strongest flood reported in recent times lasted for only 7 hours and covered only one watershed. The calibrated hydrologic model is then used to build the hydraulic model and to assess flood hazard maps for the region. The HEC-RAS model is used for this purpose, and field trips were made to the catchments in order to calibrate both the hydrologic and hydraulic models. The presented models are flexible procedures for an ungauged watershed. For some storm events they deliver good results, while for others, no parameter vectors can be found. In order to have a general methodology based on these ideas, further calibration and reconciliation of results on the dependence of flood event parameters on catchment properties are required.
Keywords: flood risk management, flash flood, semi-arid region, El Assi River, hazard maps
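The SCS method named above computes direct runoff from rainfall via the curve-number relations Q = (P − Ia)² / (P − Ia + S) with S = 25400/CN − 254 (mm). A minimal sketch with illustrative values follows; the curve numbers and rainfall depths are not the calibrated parameters of the Ras Baalbeck/Fekha basins.

```python
def scs_runoff_mm(precip_mm, cn, ia_ratio=0.2):
    """Direct runoff depth (mm) from the SCS curve-number method, as used
    inside HEC-HMS. CN and rainfall here are illustrative assumptions."""
    s = 25400.0 / cn - 254.0   # potential maximum retention, mm
    ia = ia_ratio * s          # initial abstraction (standard 0.2*S)
    if precip_mm <= ia:
        return 0.0             # all rainfall absorbed before runoff starts
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# e.g. 50 mm of storm rainfall on a CN = 80 catchment -> ~14 mm of runoff
runoff = scs_runoff_mm(50.0, 80)
```

A higher curve number (more impervious or disturbed land, as urbanization increases) yields more runoff for the same storm, which is the mechanism linking watershed characteristics to flood frequency in the study.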
Procedia PDF Downloads 478
19756 A Multivariate 4/2 Stochastic Covariance Model: Properties and Applications to Portfolio Decisions
Authors: Yuyang Cheng, Marcos Escobar-Anel
Abstract:
This paper introduces a multivariate 4/2 stochastic covariance process generalizing the one-dimensional counterparts presented in Grasselli (2017). Our construction permits stochastic correlation not only among stocks but also among volatilities, also known as co-volatility movements, both driven by more convenient 4/2 stochastic structures. The parametrization is flexible enough to separate these types of correlation, permitting their individual study. Conditions for proper changes of measure and closed-form characteristic functions under risk-neutral and historical measures are provided, allowing for applications of the model to risk management and derivative pricing. We apply the model to an expected utility theory problem in incomplete markets. Our analysis leads to closed-form solutions for the optimal allocation and value function. Conditions are provided for well-defined solutions together with a verification theorem. Our numerical analysis highlights and separates the impact of key statistics on equity portfolio decisions, in particular, volatility, correlation, and co-volatility movements, with the latter being the least important in an incomplete market.
Keywords: stochastic covariance process, 4/2 stochastic volatility model, stochastic co-volatility movements, characteristic function, expected utility theory, verification theorem
Procedia PDF Downloads 152
19755 Evaluation of Total Antioxidant Activity (TAC) of Copper Oxide Decorated Reduced Graphene Oxide (CuO-rGO) at Different Stirring Times
Authors: Aicha Bensouici, Assia Mili, Naouel Rdjem, Nacera Baali
Abstract:
Copper oxide decorated reduced graphene oxide was obtained successfully using a two-step synthesis route. First, graphene oxide (GO) was obtained using a modified Hummers method, excluding sodium nitrate from the starting materials. After a washing-centrifugation routine, the pristine GO was decorated with copper oxide using a refluxing technique at 120°C for 2 h, with equal amounts of GO and copper acetate. Three CuO-rGO nanocomposite sample types were obtained at stirring times of 30 min, 24 h, and 7 days. The TAC results show dose-dependent behavior of CuO-rGO and confirm that stirring time has no influence on the antioxidant properties; 30 min is considered the optimal stirring condition.
Keywords: copper oxide, reduced graphene oxide, TAC, GO
Procedia PDF Downloads 104
19754 A Fast Silhouette Detection Algorithm for Shadow Volumes in Augmented Reality
Authors: Hoshang Kolivand, Mahyar Kolivand, Mohd Shahrizal Sunar, Mohd Azhar M. Arsad
Abstract:
Real-time shadow generation in virtual environments and Augmented Reality (AR) has been a hot topic for the last three decades. The large amount of calculation needed for shadow generation in AR requires a fast algorithm that can be implemented in any real-time renderer. In this paper, a silhouette detection algorithm is presented to generate shadows for AR systems. The Δ+ algorithm is presented, based on extending the edges of occluders to recognize which edges are silhouettes in the case of real-time rendering. An accurate comparison between the proposed algorithm and current silhouette detection algorithms is made to show the reduction in calculation achieved by the presented algorithm. The algorithm is tested in both virtual environments and AR systems. We think that this algorithm has the potential to be a fundamental algorithm for shadow generation in all complex environments.
Keywords: silhouette detection, shadow volumes, real-time shadows, rendering, augmented reality
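For background on the silhouette test that the Δ+ algorithm accelerates: in shadow-volume generation, an edge is a silhouette when one of its two adjacent faces faces the light and the other does not. A minimal sketch on a tetrahedron follows (our illustration; the Δ+ edge-extension itself is not reproduced here).

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def silhouette_edges(verts, faces, light):
    """Classic silhouette test: an edge is a silhouette iff its two
    adjacent faces face opposite ways with respect to the light."""
    facing = []
    for f in faces:  # front-facing test per triangle (CCW outward winding)
        a, b, c = (verts[i] for i in f)
        n = cross(sub(b, a), sub(c, a))
        facing.append(dot(n, sub(light, a)) > 0.0)
    edge_faces = {}
    for fi, f in enumerate(faces):  # map each undirected edge to its faces
        for k in range(3):
            e = tuple(sorted((f[k], f[(k + 1) % 3])))
            edge_faces.setdefault(e, []).append(fi)
    return {e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]}

# Tetrahedron with outward-wound faces, lit from (2, 2, 2): the three
# edges of the single front-facing slanted face form the silhouette.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
edges = silhouette_edges(verts, faces, (2.0, 2.0, 2.0))
```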
Procedia PDF Downloads 443
19753 The Role of Logistics Services in Influencing Customer Satisfaction and Reviews in an Online Marketplace
Authors: Nafees Mahbub, Blake Tindol, Utkarsh Shrivastava, Kuanchin Chen
Abstract:
Online shopping has become an integral part of businesses today. Big players such as Amazon are setting the bar for delivery services, and many businesses are working towards meeting it. However, what happens if a seller underestimates or overestimates the delivery time? Does it translate to consumer comments, ratings, or lost sales? Although several prior studies have investigated the impact of poor logistics on customer satisfaction, the impact of underestimating delivery times has rarely been considered. The study uses real-time customer online purchase data to study the impact of missed delivery times on satisfaction.
Keywords: lost sales, delivery time, customer satisfaction, customer reviews
Procedia PDF Downloads 214
19752 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method
Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek
Abstract:
Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov’s scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases with on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in a way that each MPI rank acquires its information from the file in parallel.
In the case of GPFS, in each computational node, a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is suited to GPFS, and parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations such as matrix-matrix and matrix-vector products for calculation of the solution in every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow
Procedia PDF Downloads 13319751 Autonomous Quantum Competitive Learning
Authors: Mohammed A. Zidan, Alaa Sagheer, Nasser Metwally
Abstract:
Real-time learning is an important goal that much artificial intelligence research tries to achieve. Many problems and applications require low-cost learning, such as teaching a robot to classify and recognize patterns in real time, together with real-time recall. In this contribution, we suggest a model of quantum competitive learning based on a series of quantum gates and an additional operator. The proposed model is able to recognize incomplete patterns, and the probability of recognizing the desired pattern can be increased at the expense of the undesired ones. Moreover, these undesired ones could be utilized as new patterns for the system. The proposed model compares favourably with classical approaches and is more powerful than current quantum competitive learning approaches. Keywords: competitive learning, quantum gates, winner-take-all
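The quantum circuit itself is not reproduced in the abstract; as a classical point of reference, the winner-take-all competitive learning rule that the keywords allude to can be sketched as follows (prototype values and learning rate are illustrative):

```python
def winner_take_all_step(weights, x, lr=0.5):
    """Classical winner-take-all update: the prototype closest to the
    input pattern x is the sole unit that learns (moves toward x)."""
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    winner = dists.index(min(dists))
    weights[winner] = [wi + lr * (xi - wi) for wi, xi in zip(weights[winner], x)]
    return winner

protos = [[0.0, 0.0], [1.0, 1.0]]          # two prototype patterns
won = winner_take_all_step(protos, [0.9, 0.8])   # unit 1 wins and moves
```

In the quantum model, amplitude manipulation plays the role that this distance-based winner selection plays classically; the sketch is only the classical baseline, not the authors' quantum algorithm.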
Procedia PDF Downloads 47219750 Time Optimal Control Mode Switching between Detumbling and Pointing in the Early Orbit Phase
Authors: W. M. Ng, O. B. Iskender, L. Simonini, J. M. Gonzalez
Abstract:
A multitude of factors, including mechanical imperfections of the deployment system and the separation instant of satellites from launchers, often results in highly uncontrolled initial tumbling motion immediately after deployment. In particular, small satellites, which are characteristically launched piggyback on a large rocket, are generally allocated a large time window to complete detumbling within the early orbit phase. Because of the saturation risk of the actuators, current algorithms are conservative in order to avoid draining excessive power in the detumbling phase. This work aims to enable time-optimal switching of control modes during the early phase, reducing the time required to transition from launch to sun-pointing mode for power-budget-conscious satellites. A B-dot controller is assumed for detumbling and a PD controller for pointing. The nonlinear Euler rotation equations are used to represent the attitude dynamics of the satellites, and commercial off-the-shelf (COTS) reaction wheels and magnetorquers are used to perform the manoeuvre. Simulation results will be based on a spacecraft attitude simulator, and the use case will cover multiple orbits of launch deployment, generalizable to Low Earth Orbit (LEO) satellites. Keywords: attitude control, detumbling, small satellites, spacecraft autonomy, time optimal control
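The B-dot law used for the detumbling mode commands a magnetic dipole proportional to the negative rate of change of the measured body-frame field; a minimal sketch, with an illustrative gain and field values (not from the paper):

```python
def bdot_dipole(b_now, b_prev, dt, k=5e4):
    """B-dot control law: commanded dipole m = -k * dB/dt (body frame).
    The magnetorquer torque m x B then opposes the tumbling motion."""
    return [-k * (bn - bp) / dt for bn, bp in zip(b_now, b_prev)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

b_prev = [20e-6, 0.0, 35e-6]    # tesla, illustrative LEO field magnitudes
b_now  = [19e-6, 1e-6, 35e-6]   # field one controller step later
m = bdot_dipole(b_now, b_prev, dt=0.1)   # commanded dipole, A m^2
torque = cross(m, b_now)                 # resulting magnetic torque, N m
```

As the tumble rate decays, dB/dt in the body frame shrinks and the commanded dipole falls with it, which is why the B-dot mode cannot also perform pointing and a switch to the PD mode is needed.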
Procedia PDF Downloads 11719749 Effect of Hydraulic Residence Time on Aromatic Petrochemical Wastewater Treatment Using Pilot-Scale Submerged Membrane Bioreactor
Authors: Fatemeh Yousefi, Narges Fallah, Mohsen Kian, Mehrzad Pakzadeh
Abstract:
Petrochemical complexes release wastewater that is rich in organic pollutants and cannot be treated easily. Treatment of the wastewater from a petrochemical industry has been investigated using a submerged membrane bioreactor (MBR). For this purpose, a pilot-scale submerged MBR with a flat-sheet ultrafiltration membrane was used to treat petrochemical wastewater from the Bandar Imam Petrochemical Complex (BIPC) aromatic plant. The testing system ran continuously (24 h/day) over 6 months. Trials at different membrane fluxes and hydraulic retention times (HRT) were conducted, and the performance of the system was evaluated. During the 167-day operation of the MBR at HRTs of 18, 12, 6, and 3 h and at an infinite sludge retention time (SRT), the MBR effluent quality consistently met the requirements for discharge to the environment. Fluxes of 6.51 and 13.02 L m⁻² h⁻¹ (LMH) were sustainable, and the HRTs of 6 and 12 h corresponding to these fluxes were applicable. Membrane permeability could be fully recovered after cleaning. In addition, there was no foaming issue in the process. It was concluded that it is feasible to treat the wastewater using submerged MBR technology. Keywords: membrane bioreactor (MBR), petrochemical wastewater, COD removal, biological treatment
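The fluxes and HRTs quoted above are linked to the reactor geometry through HRT = V/Q and flux J = Q/A; a small sketch, with an assumed reactor volume and membrane area chosen only to reproduce one reported operating point (neither value is from the paper):

```python
def hrt_hours(reactor_volume_m3, flow_m3_per_h):
    """Hydraulic retention time: HRT = V / Q."""
    return reactor_volume_m3 / flow_m3_per_h

def flux_lmh(flow_m3_per_h, membrane_area_m2):
    """Membrane flux in L m^-2 h^-1 (LMH): J = Q / A, with 1 m^3 = 1000 L."""
    return flow_m3_per_h * 1000.0 / membrane_area_m2

# Illustrative pilot-scale numbers (volume and area are assumptions):
q = 0.0651                     # permeate flow, m^3/h
hrt = hrt_hours(0.3906, q)     # ≈ 6 h at this assumed volume
flux = flux_lmh(q, 10.0)       # ≈ 6.51 LMH at this assumed area
```

Doubling the flow at fixed volume and area halves the HRT and doubles the flux, which is the trade-off the trials above explore.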
Procedia PDF Downloads 52019748 Failure Inference and Optimization for Step Stress Model Based on Bivariate Wiener Model
Authors: Soudabeh Shemehsavar
Abstract:
In this paper, we consider the situation, under a life test, in which the failure times of the test units are not related deterministically to an observable, stochastic, time-varying covariate. In such a case, the joint distribution of the failure time and a marker value is useful for modeling the step stress life test. The problem of accelerating such an experiment is the main aim of this paper. We present a step stress accelerated model based on a bivariate Wiener process, with one component as the latent (unobservable) degradation process, which determines the failure times, and the other as a marker process, whose degradation values are recorded at the times of failure. Parametric inference based on the proposed model is discussed, and the optimization procedure for obtaining the optimal time for changing the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products' lifetime distribution. Keywords: bivariate normal, Fisher information matrix, inverse Gaussian distribution, Wiener process
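A minimal simulation of this latent-degradation/marker pair can be sketched as correlated Wiener increments, with failure declared when the latent path first crosses a threshold; all drifts, volatilities, the correlation, and the threshold below are assumptions for illustration, not the paper's parameters:

```python
import math
import random

def simulate_failure(mu=(0.5, 0.3), sigma=(1.0, 0.8), rho=0.6,
                     threshold=10.0, dt=0.01, seed=1):
    """Simulate a bivariate Wiener process (latent degradation X, marker Y)
    with correlated increments; return the first-passage (failure) time of
    X over the threshold and the marker value recorded at that instant."""
    rng = random.Random(seed)
    x = y = t = 0.0
    while x < threshold:
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        # Cholesky factor of the 2x2 correlation matrix [[1, rho], [rho, 1]]
        w1 = z1
        w2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * z2
        x += mu[0] * dt + sigma[0] * math.sqrt(dt) * w1
        y += mu[1] * dt + sigma[1] * math.sqrt(dt) * w2
        t += dt
    return t, y

t_fail, marker_at_failure = simulate_failure()
```

With positive drift, the first-passage time of the latent component is inverse Gaussian distributed, which is why that distribution appears among the keywords.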
Procedia PDF Downloads 31719747 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method
Authors: Rui Wu
Abstract:
In the volatile modern manufacturing environment, new orders occur randomly at any time, and pre-emptive methods are infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres. Each work centre contains parallel machines that can process certain operations. Many algorithms, such as genetic algorithms or simulated annealing, have been proposed to solve static Flexible Job Shop problems. However, the time efficiency of these methods is low, and they are not feasible for a dynamic scheduling problem. Therefore, a dynamic rule selection scheduling system based on reinforcement learning is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of the jobs, machines, work centres, and flexible job shop are selected to describe the status of the dynamic Flexible Job Shop problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select proper composite dispatching rules based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem.
The results of the simulations show that the proposed framework achieves reasonable performance and time efficiency. Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning
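The per-work-centre dispatching step described above can be sketched with two classical rules of the kind a learned policy might select between; the job attributes are illustrative, and the DQN that would choose the rule is not shown:

```python
def spt(jobs):
    """Shortest Processing Time rule: pick the job with the smallest p_time."""
    return min(jobs, key=lambda j: j["p_time"])

def edd(jobs):
    """Earliest Due Date rule: pick the job with the earliest due date."""
    return min(jobs, key=lambda j: j["due"])

def dispatch(buffer, rule):
    """Apply the rule selected (e.g., by a learned policy) to the waiting buffer."""
    return rule(buffer)

buffer = [{"id": 1, "p_time": 5, "due": 8},
          {"id": 2, "p_time": 2, "due": 20},
          {"id": 3, "p_time": 4, "due": 6}]
picked = dispatch(buffer, spt)   # SPT and EDD disagree on this buffer
```

The disagreement between rules on the same buffer is exactly what makes learned rule selection worthwhile: no single rule dominates on all objectives.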
Procedia PDF Downloads 10819746 Preparation and Characterization of Recycled Polyethylene Terephthalate/Polypropylene Blends from Automotive Textile Waste for Use in the Furniture Edge Banding Sector
Authors: Merve Ozer, Tolga Gokkurt, Yasemen Gokkurt, Ezgi Bozbey
Abstract:
In this study, we investigated the recovery of polyethylene terephthalate/polypropylene (PET/PP)-containing automotive textile waste from the post-production and post-consumer phases of the automotive sector using an upcycling technique, together with formulation and production methods that allow these wastes to substitute, as PP/PET alloys, for the virgin PP raw materials used in plastic edge band production. The laminated structure of these wastes makes it impossible to separate the incompatible PP and PET phases and thus to produce a quality raw material or product by recycling alone. Within a two-stage production process, a comprehensive approach was examined using block copolymers and maleic-grafted copolymers with different features to compatibilize these two incompatible phases. The mechanical, thermal, and morphological properties of the resulting plastic raw materials, referred to as PP/PET blends, were examined in detail, and their substitutability for the original raw materials is discussed. Keywords: mechanical recycling, melt blending, plastic blends, polyethylene, polypropylene, recycling of plastics, terephthalate, twin screw extruders
Procedia PDF Downloads 7219745 The Potential Role of Industrialized Building Systems in Malaysian Sustainable Construction: Awareness and Barriers
Authors: Aawag Mohsen Al-Awag, Wesam Salah Alaloul, M. S. Liew
Abstract:
The industrialized building system (IBS) is a method of construction with concentrated practices, consisting of techniques, products, and a set of linked elements that operate collectively to accomplish objectives. IBS has been recognised as a viable method for improving overall construction performance in terms of quality, cost, safety and health, waste reduction, and productivity. The Malaysian construction industry is considered one of the contributors to the development of the country. The acceptance level of IBS is still below government expectations; thus, the Malaysian government has been continuously encouraging the industry to use and implement IBS. Conventional systems have several drawbacks, including project delays, low economic efficiency, excess inventory, and poor product quality. When it comes to implementing IBS, construction companies still face several obstacles and problems, notably in terms of contractual and procurement concerns, which leads to the low adoption of IBS in Malaysia. Barriers to the acceptance of IBS technology remain, centred on awareness of historical failures and the risks connected with IBS practices. Therefore, the transformation from existing conventional building systems to industrialized building systems (IBS) is needed more than ever. The flexibility of IBS in Malaysia's construction industry is very low due to numerous shortcomings and obstacles. Due to its environmental, economic, and social benefits, IBS could play a significant role in the Malaysian construction industry in the future. This paper concentrates on the potential role of IBS in sustainable construction practices in Malaysia. It also highlights the awareness, barriers, advantages, and disadvantages of IBS in the construction sector.
The study concludes with recommendations for Malaysian construction stakeholders to encourage and increase the utilization of industrialised building systems. Keywords: construction industry, industrialized building system, barriers, advantages and disadvantages, construction, sustainability, Malaysia
Procedia PDF Downloads 10319744 Integer Programming: Domain Transformation in Nurse Scheduling Problem
Authors: Geetha Baskaran, Andrzej Barjiela, Rong Qu
Abstract:
Motivation: Nurse scheduling is a complex combinatorial optimization problem that is known to be NP-hard. It requires efficient re-scheduling that minimizes a trade-off among violation measures, obtained by relaxing selected constraints into soft constraints whose violations are measured. Problem Statement: In this paper, we extend our novel approach to solving the nurse scheduling problem by transforming it through information granulation. Approach: The approach satisfies the rules of a typical hospital environment based on a standard benchmark problem. Generating good work schedules has a great influence on nurses' working conditions, which are strongly related to the quality of health care. Domain transformation, which combines the strengths of operations research and artificial intelligence, is proposed for the solution of the problem. Compared to conventional methods, our approach involves judicious grouping (information granulation) of shift types that transforms the original problem into a smaller solution domain. These schedules from the smaller problem domain are later converted back into the original problem domain by taking into account the constraints that could not be represented in the smaller domain. An Integer Programming (IP) package is used to solve the transformed scheduling problem using the branch and bound algorithm. We used GNU Octave for Windows to solve this problem. Results: The scheduling problem has been solved in the proposed formalism, resulting in a high-quality schedule. Conclusion: Domain transformation represents a departure from the conventional one-shift-at-a-time scheduling approach. It offers the advantage of efficient and easily understandable solutions, as well as deterministic reproducibility of the results. We note, however, that it does not guarantee the global optimum. Keywords: domain transformation, nurse scheduling, information granulation, artificial intelligence, simulation
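A toy version of the granulated problem can illustrate the idea: shifts are grouped into a coarse granule of three types, and the (tiny) transformed domain is searched exhaustively, standing in for the IP branch-and-bound step; the demand and constraints below are invented for illustration only:

```python
from itertools import product

SHIFTS = ["D", "N", "O"]  # day, night, off — a coarse granule of shift types

def violations(schedule, demand):
    """Count soft-constraint violations: deviation from daily day-shift
    demand plus forbidden night-to-day transitions for each nurse."""
    v = 0
    days = len(demand)
    for d in range(days):
        on_duty = sum(1 for nurse in schedule if nurse[d] == "D")
        v += abs(on_duty - demand[d])
    for nurse in schedule:
        v += sum(1 for d in range(days - 1)
                 if nurse[d] == "N" and nurse[d + 1] == "D")
    return v

def best_schedule(n_nurses, demand):
    """Enumerate all granulated schedules (feasible only at toy scale)
    and return one minimizing the total violations."""
    rosters = list(product(SHIFTS, repeat=len(demand)))
    return min(product(rosters, repeat=n_nurses),
               key=lambda s: violations(s, demand))

sched = best_schedule(2, demand=[1, 2])
```

The point of granulation is that the search runs over the few granulated shift types rather than the full set of original shifts; constraints that cannot be expressed in the granule are reintroduced when mapping the schedule back to the original domain.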
Procedia PDF Downloads 39719743 Aging Time Effect on 58S Microstructure
Authors: Nattawipa Pakasri
Abstract:
58S (60SiO2-36CaO-4P2O5) three-dimensionally ordered macroporous bioactive glasses (3DOM-BGs) were synthesized by the sol-gel method using dual templating. The non-ionic surfactant Brij56, used as one template component, produced mesopores, while spherical PMMA colloidal crystals, used as the other template component, yielded either three-dimensionally ordered macroporous products or shaped bioactive glass nanoparticles. For the bioactive glass aged for 12 h at room temperature, no structural transformation occurred and the 3DOM structure was produced (Figure a), since no shrinkage took place during the aging step. After 48 h of aging, the 3DOM structure remained, and nanocubes with ∼120 nm edge lengths and nanosphere particles of ∼50 nm were obtained (Figures c, d). The octahedral and tetrahedral holes of the close-packed PMMA template give the two final shapes of the 3DOM-BGs, rounded and cubic, respectively. Increasing the aging time from 12 h to 24 h and 48 h affected the thickness of the interconnecting macropore network: the wall thickness gradually decreased with increasing aging time. Keywords: three-dimensionally ordered macroporous bioactive glasses, sol-gel method, PMMA, bioactive glass
Procedia PDF Downloads 11519742 Crooked Wood: Finding Potential in Local Hardwood
Authors: Livia Herle
Abstract:
A large part of the Principality of Liechtenstein is covered by forest. Three-quarters of this forest is classified as protective due to the alpine landscape of the country, which deteriorates the quality of the wood. Nevertheless, the forest is one of the most important sources of raw material. However, of the wood harvested annually in Liechtenstein, about two-thirds is used directly as an energy source, drastically shortening the carbon storage cycle of wood. Furthermore, due to climate change, forest structures are changing. Predictions for the forest in Liechtenstein indicate that spruce will mostly vanish at low altitudes, surviving only in the higher regions. In contrast, hardwood species will experience a rise, resulting in a more mixed forest. Thus, the main research focus will be placed on the potential of hardwood, as well as on prolonging the lifespan of a timber log before it ends up as an energy source. An analysis of the local occurrence of hardwood species and their quality will serve as a tool to apply this knowledge to constructional solutions. As a system that works with short-span timber and thus qualifies for the regional conditions of hardwood, reciprocal frame systems will be further investigated. These can be defined as load-bearing structures in which only two beams connect at a time, avoiding complex joining situations; furthermore, the beams are mutually supporting. This allows the use of short pieces of preferably solid wood. As a result, the system permits easy assembly as well as disassembly. To promote a more circular application of wood, possible cascading scenarios for the structural solutions will be added.
In a workshop at the School of Architecture of the University of Liechtenstein in the Summer Semester 2024, 1:1 prototypes of reciprocal frame systems using only local hardwood will serve as a tool to further test the theoretical analyses. Keywords: hardwood, cascading wood, reciprocal frames, crooked wood, forest structures, climate change
Procedia PDF Downloads 7419741 Predicting Destination Station Based on Public Transit Passenger Profiling
Authors: Xuyang Song, Jun Yin
Abstract:
Smart cards have become a near-universal tool in public transit. They collect large amounts of data on buses, urban rail transit, and ferries, and open up possibilities for passenger profiling. This paper combines offline passenger profiling with real-time prediction to propose a method that can accurately predict the destination station at the moment a passenger tags on. Firstly, a static database of user travel characteristics is constructed after identifying passenger travel patterns with Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Two kinds of passenger travel habits are identified: origin-destination (OD) travel habits and destination (D) station travel habits. Then a rapid real-time prediction algorithm based on transit passenger profiling is proposed, which can predict the destination of on-board passengers. This article combines offline learning with online prediction, providing a technical foundation for real-time passenger flow prediction, monitoring and simulation, and short-term passenger behavior and demand prediction. The technology facilitates the efficient, real-time acquisition of passengers' travel destinations and demand. Finally, an actual case was simulated, demonstrating the feasibility and efficiency of the method. Keywords: travel behavior, destination prediction, public transit, passenger profiling
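The offline-profile/online-lookup split described above can be sketched as follows; the DBSCAN clustering step is replaced here by a simple most-frequent-destination table, so this is an illustrative baseline rather than the authors' algorithm:

```python
from collections import Counter, defaultdict

def build_profiles(history):
    """Offline step: for each (card, origin) pair, count the destinations
    observed in historical smart-card trips."""
    profiles = defaultdict(Counter)
    for card, origin, dest in history:
        profiles[(card, origin)][dest] += 1
    return profiles

def predict_destination(profiles, card, origin):
    """Online step: when the passenger tags on at `origin`, return the
    most frequent historical destination, or None for an unseen pair."""
    counts = profiles.get((card, origin))
    return counts.most_common(1)[0][0] if counts else None

history = [("c1", "A", "B"), ("c1", "A", "B"), ("c1", "A", "C"),
           ("c1", "B", "A")]
profiles = build_profiles(history)
guess = predict_destination(profiles, "c1", "A")
```

Because the profile table is built offline, the online prediction is a single dictionary lookup, which is what makes real-time prediction at tag-on feasible.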
Procedia PDF Downloads 1919740 Implementation of Conceptual Real-Time Embedded Functional Design via Drive-By-Wire ECU Development
Authors: Ananchai Ukaew, Choopong Chauypen
Abstract:
Design concepts for real-time embedded systems can be realized initially by introducing novel design approaches. In this work, a model-based design approach and in-the-loop testing were employed early, in the conceptual and preliminary phases, to formulate design requirements and perform quick real-time verification. The design and analysis methodology includes simulation analysis, model-based testing, and in-the-loop testing. The design of a conceptual drive-by-wire (DBW) algorithm for an electronic control unit (ECU) is presented to demonstrate the conceptual design process, analysis, and functionality evaluation. The DBW ECU function concepts can be implemented in the vehicle system to improve electric vehicle (EV) conversion drivability. However, within a new development process, conceptual ECU functions and parameters need to be evaluated. As a result, a testing system was employed to support the evaluation of conceptual DBW ECU functions. For the current setup, the system components consisted of actual DBW ECU hardware, electric vehicle models, and the Controller Area Network (CAN) protocol. The vehicle models and the CAN bus interface were both implemented as real-time applications, where the ECU and CAN protocol functionality were verified according to the design requirements. The proposed system could potentially benefit rapid real-time analysis of design parameters for conceptual system or software algorithm development. Keywords: drive-by-wire ECU, in-the-loop testing, model-based design, real-time embedded system
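A DBW throttle command exchanged over CAN between the vehicle model and the ECU can be sketched as a packed 8-byte payload; the frame ID and byte layout below are hypothetical, invented for illustration, not the signal definition used in the paper:

```python
import struct

def pack_throttle_frame(can_id, throttle_pct, counter):
    """Pack a hypothetical DBW throttle command into an 8-byte CAN payload:
    big-endian uint16 throttle in 0.01 % units, uint8 rolling counter,
    and 5 pad bytes to fill the classic CAN data field."""
    payload = struct.pack(">HB5x", int(throttle_pct * 100), counter & 0xFF)
    return can_id, payload

def unpack_throttle_frame(payload):
    """Reverse the packing: recover throttle percentage and counter."""
    raw, counter = struct.unpack(">HB5x", payload)
    return raw / 100.0, counter

can_id, data = pack_throttle_frame(0x120, 42.5, 7)
throttle, ctr = unpack_throttle_frame(data)
```

A rolling counter of this kind is a common plausibility check in in-the-loop testing: the receiver can detect dropped or repeated frames by watching the counter sequence.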
Procedia PDF Downloads 350