Search results for: ant colony optimization algorithm
1051 Efficient Frequent Itemset Mining Methods over Real-Time Spatial Big Data
Authors: Hamdi Sana, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, there has been a huge increase in the use of spatio-temporal applications in which data and queries are continuously moving. As a result, the need to process real-time spatio-temporal data is clear, and real-time stream data management has become a hot topic. The sliding window model and frequent itemset mining over dynamic data are among the most important problems in data mining. The sliding window model for frequent itemset mining is widely used in data stream mining due to its emphasis on recent data and its bounded memory requirement. Existing methods use the traditional transaction-based sliding window model, where the window size is a fixed number of transactions. This model assumes that all transactions arrive at a constant rate, which is not suited for real-time applications, and using it in such applications endangers their performance. Based on these observations, this paper relaxes the notion of window size and proposes a timestamp-based sliding window model. In the proposed frequent itemset mining algorithm, support conditions are used to differentiate frequent and infrequent patterns. Thereafter, a tree is developed to incrementally maintain the essential information. We evaluate our contribution, and the preliminary results are quite promising.
Keywords: real-time spatial big data, frequent itemset, transaction-based sliding window model, timestamp-based sliding window model, weighted frequent patterns, tree, stream query
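The abstract leaves the algorithmic details to the full paper. As a minimal illustrative sketch (not the authors' tree-based method), a timestamp-based sliding window can be implemented by expiring transactions by age rather than by count; the class name and parameters below are hypothetical:

```python
from collections import deque
from itertools import combinations

class TimestampWindowMiner:
    """Frequent-itemset mining over a timestamp-based sliding window.

    Unlike a transaction-based window (fixed transaction count), the window
    here covers the last `window_secs` seconds, so the number of transactions
    it holds varies with the arrival rate.
    """

    def __init__(self, window_secs, min_support):
        self.window_secs = window_secs
        self.min_support = min_support  # fraction of windowed transactions
        self.window = deque()           # (timestamp, frozenset(items))

    def add(self, timestamp, items):
        self.window.append((timestamp, frozenset(items)))
        # Expire transactions older than the window span.
        while self.window and timestamp - self.window[0][0] > self.window_secs:
            self.window.popleft()

    def frequent_itemsets(self, max_size=2):
        """Brute-force support counting over the current window."""
        counts = {}
        for _, txn in self.window:
            for k in range(1, max_size + 1):
                for combo in combinations(sorted(txn), k):
                    counts[combo] = counts.get(combo, 0) + 1
        n = len(self.window)
        return {s: c / n for s, c in counts.items()
                if n and c / n >= self.min_support}
```

A burst of transactions simply makes the window hold more of them; a lull makes it hold fewer, which is exactly the behavior a transaction-count window cannot express.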
Procedia PDF Downloads 160
1050 Finding the Longest Common Subsequence in Normal DNA and Disease Affected Human DNA Using Self Organizing Map
Authors: G. Tamilpavai, C. Vishnuppriya
Abstract:
Bioinformatics is an active research area that combines biological matter with computer science research. The longest common subsequence (LCSS) is one of the major challenges in various bioinformatics applications. Computing the LCSS plays a vital role in biomedicine and is an essential task in DNA sequence analysis in genetics, including a wide range of disease-diagnosis steps. The objective of the proposed system is to find the longest common subsequence present in normal and disease-affected human DNA sequences using a Self Organizing Map (SOM) and LCSS. The human DNA sequences are collected from the National Center for Biotechnology Information (NCBI) database. Initially, each DNA sequence is separated into k-mers using a k-mer separation rule. Mean and median values are calculated for each separated k-mer and fed as input to the Self Organizing Map for clustering. The resulting clusters are then given to the Longest Common Subsequence (LCSS) algorithm to find the common subsequences present in every cluster; this yields n(n-1)/2 subsequences per cluster, where n is the number of k-mers in that cluster. Experimental outcomes of the proposed system produce the possible longest common subsequences of normal and disease-affected DNA data. Thus, the proposed system is a good initial aid for finding disease-causing sequences. Finally, a performance analysis is carried out for different DNA sequences; the obtained values show that the LCSS is retrieved in a shorter time than with the existing system.
Keywords: clustering, k-mers, longest common subsequence, SOM
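The pairwise LCSS computation at the core of this pipeline is the classic dynamic-programming problem; a minimal sketch (independent of the SOM clustering step, which the abstract does not detail) is:

```python
def longest_common_subsequence(a, b):
    """Classic dynamic-programming LCS for two strings.

    O(len(a) * len(b)) time; dp[i][j] holds the LCS length of a[:i] and b[:j].
    Within a cluster of n k-mers, this is run on all n(n-1)/2 pairs.
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack to recover one longest common subsequence.
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```

For example, `longest_common_subsequence("AGGTAB", "GXTXAYB")` returns `"GTAB"`.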
Procedia PDF Downloads 265
1049 Gap Formation into Bulk InSb Crystals Grown by the VDS Technique Revealing Enhancement in the Transport Properties
Authors: Dattatray Gadkari, Dilip Maske, Manisha Joshi, Rashmi Choudhari, Brij Mohan Arora
Abstract:
The vertical directional solidification (VDS) technique has been applied to the growth of bulk InSb crystals. The concept of practical stability is applied to detached bulk crystal growth on earth in a simplified design. By optimizing the setup and growth parameters, 32 ingots of 65-75 mm in length and 10-22 mm in diameter have been grown. The results indicate that the wetting angle of the melt on the ampoule wall and the pressure difference across the interface are the crucial factors affecting the meniscus shape and stability. Taking into account both heat transfer and capillarity, it is demonstrated that the process is stable for convex menisci (seen from the melt), provided that pressure fluctuations remain within a stable range. During crystal growth, the pressure-difference control rate and the solidification rate must be kept in balance to maintain the width of the gas gap. It is concluded that practical stability gives valuable knowledge of the dynamics and could usefully be applied to other crystal growth processes, especially those involving capillary shaping. Optoelectronic properties were investigated in relation to the type of solidification (attached and detached ingot growth). Room-temperature physical properties such as Hall mobility, FTIR, Raman spectroscopy and microhardness measured on antimonide samples grown by the VDS technique show the highest values reported to date. These results reveal that these crystals can be used to produce InSb with high mobility for device applications.
Keywords: alloys, electronic materials, semiconductors, crystal growth, solidification, etching, optical microscopy, crystal structure, defects, Hall effect
Procedia PDF Downloads 416
1048 Fabrication and Characterization of Ceramic Matrix Composite
Authors: Yahya Asanoglu, Celaletdin Ergun
Abstract:
Ceramic-matrix composites (CMC) have significant prominence in various engineering applications because of their heat resistance combined with resistance to brittle, catastrophic failure. In this study, specific raw materials have been chosen to obtain a CMC material suitable for high-temperature dielectric applications. The CMC material will be manufactured through the polymer infiltration and pyrolysis (PIP) method. During manufacturing, vacuum infiltration and autoclaving will be applied to decrease porosity and obtain higher mechanical properties, although this advantage reduces the electrical performance of the material. Adjusting the pyrolysis time and temperature produces significant differences in the properties of the resulting material. The mechanical and thermal properties will be investigated, in addition to measuring the dielectric constant and loss tangent values within the Ku-band spectrum (12 to 18 GHz). XRD and TGA/DTA analyses will be employed to confirm the transition of the precursor to ceramic phases and to detect critical transition temperatures. Additionally, SEM analysis of the fracture surfaces will be performed to identify failure mechanisms such as fiber pull-out and crack deflection, which lead to ductility and toughness in the material. In this research, the cost-effectiveness and applicability of the PIP method in the manufacture of CMC materials will be demonstrated, while the pyrolysis time, temperature and cycle count for specific materials are determined experimentally. Several resins will also be shown to be potential raw materials for CMC radome and antenna applications. This research is distinguished from previous related papers in that combinations of different precursors and fabrics are tested to identify the unique pros and cons of each combination. In this way, it is an experimental summary of previous works with unique PIP parameters and a guide to the manufacture of CMC radomes and antennas.
Keywords: CMC, PIP, precursor, quartz
Procedia PDF Downloads 159
1047 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets
Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar
Abstract:
The study of primary flow velocity and self-impinging secondary jet flow mixing is important from both fundamental research and application points of view. Real industrial configurations are more complex than the simple shear layers present in idealized numerical thrust-vectoring models due to the presence of combustion, swirl and confinement. Predicting the flow features of self-impinging secondary jets in a supersonic primary flow is complex owing to the large number of parameters involved. Earlier studies have highlighted several key features of self-impinging jets, but an extensive characterization of the jet interaction between a supersonic flow and self-impinging secondary sonic jets is still an active research topic. In this paper, numerical studies have been carried out using a validated two-dimensional standard k-omega turbulence model for the design optimization of a thrust vector control (TVC) system using shock-induced self-impinging secondary sonic jets in non-reacting flows. The flow features of the TVC system were examined with various secondary jets at different divergent locations and jet impinging angles, with the same inlet jet pressure and mass flow ratio. The results from the parametric studies reveal that, in addition to the primary-to-secondary mass flow ratio, the characteristics of the self-impinging secondary jets have a bearing on efficient thrust vectoring. We conclude that self-impinging secondary jet nozzles are better than a single jet nozzle with the same secondary mass flow rate, because fixing the self-impinging secondary jet nozzles at the proper jet angle could facilitate better thrust vectoring for any supersonic aerospace vehicle.
Keywords: fluidic thrust vectoring, rocket steering, supersonic to sonic jet interaction, TVC in aerospace vehicles
Procedia PDF Downloads 587
1046 Hydraulic Optimization of an Adjustable Spiral-Shaped Evaporator
Authors: Matthias Feiner, Francisco Javier Fernández García, Michael Arneman, Martin Kipfmüller
Abstract:
To ensure reliability in miniaturized devices or processes with increased heat fluxes, very efficient cooling methods have to be employed to cope with the small available cooling surfaces. To address this problem, a certain type of evaporator/heat exchanger was developed, called a swirl evaporator because of its flow characteristic. The swirl evaporator consists of a concentrically eroded screw geometry, in which a capillary tube is guided, inserted into a pocket hole in components with a high heat load; the inner diameter of the hole is between one and three millimeters. The liquid refrigerant R32 is sprayed through the capillary tube, aligned in the center of the bore hole, onto the end face of the blind hole and is sucked off against the injection direction through the screw geometry. Because the refrigerant is sucked off along a helical path (twisted flow), it is accelerated against the hot wall (centrifugal acceleration). This results in an increase in the critical heat flux of up to 40%, so more heat can be dissipated on the same surface/available installation space, enabling a wide range of technical applications. To optimize the design for the needs of various fields of industry, such as internal tool cooling when machining nickel-base alloys like Inconel 718, a correlation-based model of the swirl evaporator was developed. The model is separated into 3 subgroups with 5 regimes overall. The pressure drop and heat transfer are calculated separately, and an approach to determine the location of phase change in the capillary and the swirl was implemented. A test stand has been developed to verify the simulation.
Keywords: helically-shaped, oil-free, R-32, swirl-evaporator, twist-flow
Procedia PDF Downloads 107
1045 Design and Fabrication of Pulse Detonation Engine Based on Numerical Simulation
Authors: Vishal Shetty, Pranjal Khasnis, Saptarshi Mandal
Abstract:
This work explores the design and fabrication of a fundamental pulse detonation engine (PDE) prototype on the basis of the pressure and temperature pulses obtained from a numerical simulation of the same. A PDE is an advanced propulsion system that utilizes detonation waves for thrust generation: a fuel-air mixture is ignited to create a supersonic detonation wave, resulting in rapid energy release, high pressures and high temperatures. The operational cycle includes fuel injection, ignition, detonation, exhaust of combustion products and purging of the chamber for the next cycle. This work presents the core operating principles of a PDE, highlighting its potential advantages over traditional jet engines that rely on continuous combustion. The design focuses on a straightforward, valve-controlled system for fuel and oxidizer injection into a detonation tube. The detonation was initiated using an electronically controlled spark plug or a similar high-energy ignition source. Following the detonation, a purge valve was employed to expel the combusted gases and prepare the tube for the next cycle. Key design considerations include material selection for the detonation tube to withstand the high temperatures and pressures generated during detonation; fabrication prioritized readily available machining methods to create a functional prototype. The work details the testing procedures for verifying the functionality of the PDE prototype, with emphasis on measuring thrust generation and capturing pressure data within the detonation tube. The numerical analysis presents a performance evaluation and potential areas for future design optimization.
Keywords: pulse detonation engine, ignition, detonation, combustion
Procedia PDF Downloads 18
1044 Layer-by-Layer Modified Ceramic Membranes for Micropollutant Removal
Authors: Jenny Radeva, Anke-Gundula Roth, Christian Goebbert, Robert Niestroj-Pahl, Lars Daehne, Axel Wolfram, Juergen Wiese
Abstract:
Ceramic membranes for water purification combine excellent stability and long service life with high chemical resistance. Layer-by-Layer coating is a well-known technique for customizing and optimizing the filtration properties of membranes, but it is mostly used on polymeric membranes. Ceramic membranes comprising a metal oxide filtration layer of Al2O3 or TiO2 are charged and therefore highly suitable for polyelectrolyte adsorption, and the high stability of the membrane support allows efficient backwashing and chemical cleaning. The presented study reports a metal oxide/organic composite membrane with increased rejection of bivalent salts like MgSO4 and of the organic micropollutant diclofenac. A self-built apparatus, which controls the flow and timing of the polyelectrolytes and washing solutions, was used to apply the polyelectrolyte multilayers to the ceramic membrane. As support for the Layer-by-Layer coat, ceramic mono-channel membranes were used with an inner capillary of 8 mm diameter, which is connected to the coating device; the inner wall of the capillary is coated subsequently with polycations and polyanions. The filtration experiments were performed with a feed solution of MgSO4 and diclofenac. The salt content of the permeate was detected conductometrically, and diclofenac was measured by UV absorption. The results show retention values of 70% for magnesium sulfate and 60% for diclofenac. Further experiments studied various parameters of the composite membrane, such as Molecular Weight Cut-Off and pore size, zeta potential, and its mechanical and chemical robustness.
Keywords: water purification, polyelectrolytes, membrane modification, layer-by-layer coating, ceramic membranes
Procedia PDF Downloads 243
1043 Prospective Validation of the FibroTest Score in Assessing Liver Fibrosis in Hepatitis C Infection with Genotype 4
Authors: G. Shiha, S. Seif, W. Samir, K. Zalata
Abstract:
FibroTest (FT) is a non-invasive score of liver fibrosis that combines the quantitative results of 5 serum biochemical markers (alpha-2-macroglobulin, haptoglobin, apolipoprotein A1, gamma-glutamyl transpeptidase (GGT) and bilirubin), adjusted for the patient's age and sex in a patented algorithm, to generate a measure of fibrosis. FT has been validated in patients with chronic hepatitis C (CHC) (Halfon et al., Gastroenterol. Clin. Biol. (2008), 32 6 suppl 1, 22-39), but its validation in genotype 4 is not well studied. Our aim was to evaluate the performance of FibroTest in an independent prospective cohort of hepatitis C patients with genotype 4. The subjects were 122 patients with CHC. All liver biopsies were scored using the METAVIR system. The FibroTest score was measured, and the performance of the cut-off score was assessed using an ROC curve. Among patients with advanced fibrosis, the FT matched the liver biopsy identically in 18.6% of cases, overestimated the stage of fibrosis in 44.2% and underestimated it in 37.7%. In patients with no/mild fibrosis, an identical match was detected in 39.2% of cases, with overestimation in 48.1% and underestimation in 12.7%. Overall, the test results showed identical matching, overestimation and underestimation in 32%, 46.7% and 21.3% of cases, respectively. Using the ROC curve, it was found that FT at the cut-off point of 0.555 could discriminate early from advanced stages of fibrosis with an area under the ROC curve (AUC) of 0.72, sensitivity of 65%, specificity of 69%, PPV of 68%, NPV of 66% and accuracy of 67%. As the FibroTest score overestimates the stage of advanced fibrosis, it should not be considered a reliable surrogate for liver biopsy in hepatitis C infection with genotype 4.
Keywords: fibrotest, chronic hepatitis C, genotype 4, liver biopsy
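The reported sensitivity, specificity, PPV, NPV and accuracy at a cut-off all derive from the same confusion matrix. A generic sketch of that computation (illustrative data only, not the study's cohort):

```python
def diagnostic_metrics(scores, labels, cutoff):
    """Confusion-matrix metrics for a continuous score dichotomized at
    `cutoff` (score >= cutoff predicts advanced fibrosis).

    labels: 1 = advanced fibrosis on biopsy, 0 = no/mild fibrosis.
    Assumes each of the four cells is non-empty.
    """
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    return {
        'sensitivity': tp / (tp + fn),   # true-positive rate
        'specificity': tn / (tn + fp),   # true-negative rate
        'ppv': tp / (tp + fp),           # positive predictive value
        'npv': tn / (tn + fn),           # negative predictive value
        'accuracy': (tp + tn) / len(labels),
    }
```

Sweeping `cutoff` over the observed scores and plotting sensitivity against (1 - specificity) traces the ROC curve whose area is the reported AUC.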
Procedia PDF Downloads 413
1042 Improving Pneumatic Artificial Muscle Performance Using Surrogate Model: Roles of Operating Pressure and Tube Diameter
Authors: Van-Thanh Ho, Jaiyoung Ryu
Abstract:
In soft robotics, the optimization of fluid dynamics through pneumatic methods plays a pivotal role in enhancing operational efficiency and reducing energy loss, particularly when replacing conventional techniques such as cable-driven electromechanical systems. The pneumatic model employed in this study is a framework designed to efficiently channel pressure from a high-pressure reservoir to the various muscle locations on the robot's body through a branching network of tubes. The study introduces a comprehensive pneumatic model encompassing a reservoir, tubes and Pneumatically Actuated Muscles (PAM), developed from the principles of shock tube theory. Notably, the study leverages experimental data to enhance the understanding of the interplay between the PAM structure and the surrounding fluid; this improved interactive approach uses a morphing motion guided by a contraction function. The findings demonstrate a high degree of accuracy in predicting the pressure distribution within the PAM, with the error relative to the experimental data remaining below 10%. Additionally, the research employs a machine learning model, specifically a surrogate model based on the Kriging method, to assess and quantify uncertainty factors related to the initial reservoir pressure and tube diameter. This comprehensive approach enhances our understanding of pneumatic soft robotics and its potential for improved operational efficiency.
Keywords: pneumatic artificial muscles, pressure drop, morphing motion, branched network, surrogate model
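The abstract names a Kriging surrogate but not its configuration. As a minimal one-dimensional sketch with a Gaussian correlation model (pure Python, hypothetical parameter values; a real surrogate would also estimate the correlation length and a trend term from the data):

```python
import math

def _solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

class Kriging1D:
    """Minimal Kriging interpolator with a Gaussian correlation model.

    theta controls the correlation length; the tiny nugget keeps the
    correlation matrix numerically well conditioned.
    """
    def __init__(self, xs, ys, theta=1.0, nugget=1e-10):
        self.xs, self.theta = xs, theta
        K = [[math.exp(-theta * (xi - xj) ** 2) + (nugget if i == j else 0.0)
              for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
        self.w = _solve(K, ys)   # kernel weights fitted to the samples

    def predict(self, x):
        return sum(wi * math.exp(-self.theta * (x - xi) ** 2)
                   for wi, xi in zip(self.w, self.xs))
```

Trained on a few (pressure, response) samples, such a surrogate reproduces the samples exactly and interpolates smoothly between them, which is what makes it useful for uncertainty sweeps over reservoir pressure and tube diameter.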
Procedia PDF Downloads 97
1041 High Sensitivity Crack Detection and Locating with Optimized Spatial Wavelet Analysis
Authors: A. Ghanbari Mardasi, N. Wu, C. Wu
Abstract:
In this study, a spatial wavelet-based crack localization technique for a thick beam is presented. Wavelet scale in spatial wavelet transformation is optimized to enhance crack detection sensitivity. A windowing function is also employed to erase the edge effect of the wavelet transformation, which enables the method to detect and localize cracks near the beam/measurement boundaries. Theoretical model and vibration analysis considering the crack effect are first proposed and performed in MATLAB based on the Timoshenko beam model. Gabor wavelet family is applied to the beam vibration mode shapes derived from the theoretical beam model to magnify the crack effect so as to locate the crack. Relative wavelet coefficient is obtained for sensitivity analysis by comparing the coefficient values at different positions of the beam with the lowest value in the intact area of the beam. Afterward, the optimal wavelet scale corresponding to the highest relative wavelet coefficient at the crack position is obtained for each vibration mode, through numerical simulations. The same procedure is performed for cracks with different sizes and positions in order to find the optimal scale range for the Gabor wavelet family. Finally, Hanning window is applied to different vibration mode shapes in order to overcome the edge effect problem of wavelet transformation and its effect on the localization of crack close to the measurement boundaries. Comparison of the wavelet coefficients distribution of windowed and initial mode shapes demonstrates that window function eases the identification of the cracks close to the boundaries.
Keywords: edge effect, scale optimization, small crack locating, spatial wavelet
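The two key ingredients described, a Gabor-family spatial wavelet transform of a mode shape and a Hanning window to suppress the edge effect, can be sketched as follows (an illustrative discrete convolution, not the authors' MATLAB implementation; the kernel's center frequency of 5 is an assumed value):

```python
import math

def hanning(n):
    """Hanning window; tapers a mode shape to zero at the boundaries,
    suppressing the edge effect of the spatial wavelet transform."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def gabor_coefficients(signal, scale):
    """Spatial wavelet coefficients for a real Gabor kernel
    (Gaussian-windowed cosine), computed by discrete convolution."""
    n = len(signal)
    half = n // 4
    kernel = [math.exp(-0.5 * (t / scale) ** 2) * math.cos(5 * t / scale)
              for t in range(-half, half + 1)]
    coeffs = []
    for i in range(n):
        acc = 0.0
        for k, g in enumerate(kernel):
            j = i + k - half
            if 0 <= j < n:     # zero-padding outside the measurement span
                acc += signal[j] * g
        coeffs.append(acc / math.sqrt(scale))
    return coeffs

# Window a (synthetic) mode shape before transforming it:
mode = [math.sin(math.pi * i / 63) for i in range(64)]
windowed = [m * w for m, w in zip(mode, hanning(64))]
coeffs = gabor_coefficients(windowed, scale=4.0)
```

In the paper's scheme, a crack shows up as a local peak of |coefficient| relative to the lowest value in the intact region, and the scale is swept to maximize that relative peak.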
Procedia PDF Downloads 356
1040 Hub Traveler Guidance Signage Evaluation via Panoramic Visualization Using Entropy Weight Method and TOPSIS
Authors: Si-yang Zhang, Chi Zhao
Abstract:
Comprehensive transportation hubs are important nodes of the transportation network, and their internal signage functions as guidance and distribution assistance, which directly affects the operational efficiency of traffic in and around the hubs. Reasonably installed signage effectively attracts the visual focus of travelers and improves wayfinding efficiency. Among the elements of signage, the visual guidance effect is the key factor affecting information conveyance, and it should be evaluated during the design and optimization process. However, existing evaluation methods mostly focus on layout and cannot fully establish whether signage caters to travelers' needs. This study conducted field investigations, developed panoramic videos for multiple transportation hubs in China, and designed surveys accordingly. Human subjects were recruited to watch the panoramic videos via virtual reality (VR) and respond to the surveys. In this paper, Pudong Airport and Xi'an North Railway Station were studied and compared as examples because of their high traveler volume and relatively well-developed traveler service systems. Visual attention was captured by eye tracker, and subjective satisfaction ratings were collected through surveys. The Entropy Weight Method (EWM) was utilized to evaluate the effectiveness of signage elements, and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was used to rank the importance of the elements. The results show that the degree of travelers' visual attention significantly affects the evaluation results of guidance signage. Key factors affecting visual attention include legibility, obstruction and defacement rates, informativeness, and whether signage is set up hierarchically.
Keywords: traveler guidance signage, panoramic video, visual attention, entropy weight method, TOPSIS
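The EWM-plus-TOPSIS pipeline the study applies is a standard multi-criteria method; a minimal sketch for benefit-type criteria (an illustrative decision matrix, not the study's survey data):

```python
import math

def entropy_weights(X):
    """Entropy Weight Method: objective criterion weights from the
    decision matrix X (m alternatives x n benefit criteria, values > 0)."""
    m, n = len(X), len(X[0])
    ent = []
    for j in range(n):
        col = [X[i][j] for i in range(m)]
        s = sum(col)
        p = [v / s for v in col]
        # Normalized Shannon entropy of the criterion's value distribution.
        ent.append(-sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m))
    d = [1 - e for e in ent]          # degree of diversification
    return [di / sum(d) for di in d]

def topsis(X, w):
    """Rank alternatives by relative closeness to the ideal solution."""
    m, n = len(X), len(X[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[w[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]
    best = [max(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        dp = math.sqrt(sum((V[i][j] - best[j]) ** 2 for j in range(n)))
        dm = math.sqrt(sum((V[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(dm / (dp + dm))
    return scores
```

A criterion on which all signs score similarly carries high entropy and therefore little weight; an alternative that is worst on every criterion gets a closeness score of 0.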
Procedia PDF Downloads 67
1039 Health Trajectory Clustering Using Deep Belief Networks
Authors: Farshid Hajati, Federico Girosi, Shima Ghassempour
Abstract:
We present a Deep Belief Network (DBN) method for clustering health trajectories. A DBN is a deep architecture that consists of a stack of Restricted Boltzmann Machines (RBM); in a deep architecture, each layer learns more complex features than the previous layers. The proposed method relies on the DBN for clustering without using the backpropagation learning algorithm, and it performs better than a deep neural network thanks to the initialization of the connecting weights. We use the Contrastive Divergence (CD) method for training the RBMs, which increases the performance of the network. The performance of the proposed method is evaluated extensively on the Health and Retirement Study (HRS) database. The University of Michigan Health and Retirement Study is a nationally representative longitudinal study that has surveyed more than 27,000 elderly and near-elderly Americans since its inception in 1992. Participants are interviewed every two years, and data are collected on physical and mental health, insurance coverage, financial status, family support systems, labor market status, and retirement planning. The dataset is publicly available, and we use the RAND HRS version L, which is an easy-to-use, cleaned-up version of the data. The sample size is 268, and the length of each trajectory is 10; the trajectories do not stop when a patient dies and represent 10 different interviews of living patients. Compared to state-of-the-art benchmarks, the experimental results show the effectiveness and superiority of the proposed method in clustering health trajectories.
Keywords: health trajectory, clustering, deep learning, DBN
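An RBM layer trained with one-step Contrastive Divergence (CD-1) is the building block of the DBN described here; a pure-Python sketch with toy sizes and a hypothetical learning rate (the paper's architecture and data differ):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class RBM:
    """Restricted Boltzmann Machine trained with one-step
    Contrastive Divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, seed=0):
        rng = random.Random(seed)
        self.W = [[rng.gauss(0, 0.1) for _ in range(n_hidden)]
                  for _ in range(n_visible)]
        self.b = [0.0] * n_visible   # visible biases
        self.c = [0.0] * n_hidden    # hidden biases

    def hidden_probs(self, v):
        return [sigmoid(self.c[j] + sum(v[i] * self.W[i][j]
                for i in range(len(v)))) for j in range(len(self.c))]

    def visible_probs(self, h):
        return [sigmoid(self.b[i] + sum(h[j] * self.W[i][j]
                for j in range(len(h)))) for i in range(len(self.b))]

    def cd1_step(self, v0, lr=0.1, rng=random.Random(1)):
        # Positive phase: hidden activations driven by the data.
        h0 = self.hidden_probs(v0)
        h_sample = [1.0 if rng.random() < p else 0.0 for p in h0]
        # Negative phase: one reconstruction step.
        v1 = self.visible_probs(h_sample)
        h1 = self.hidden_probs(v1)
        # Gradient approximation: <v h>_data - <v h>_recon.
        for i in range(len(v0)):
            for j in range(len(h0)):
                self.W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])
        for i in range(len(v0)):
            self.b[i] += lr * (v0[i] - v1[i])
        for j in range(len(h0)):
            self.c[j] += lr * (h0[j] - h1[j])
```

In a DBN, RBMs like this are stacked greedily: each trained layer's hidden activations become the next layer's visible input, which is also how the weight initialization the abstract credits is obtained.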
Procedia PDF Downloads 368
1038 Optimisation of Metrological Inspection of a Developmental Aeroengine Disc
Authors: Suneel Kumar, Nanda Kumar J. Sreelal Sreedhar, Suchibrata Sen, V. Muralidharan
Abstract:
Fan technology is critical and crucial for any aero engine. The fan disc forms a critical part of the fan module, and it is an airworthiness requirement to have a metrologically qualified disc. The current study uses tactile probing and scanning on an articulated measuring machine (AMM), a bridge-type coordinate measuring machine (CMM) and metrology software for intermediate and final dimensional and geometrical verification during the prototype development of the disc, which is manufactured through forging and machining. The circumferential dovetails, manufactured through milling, are evaluated based on the analysed metrological process. Metrological optimization requires a change of philosophy: quality measurements must be made available as fast as possible, to improve process knowledge and accelerate the process, while remaining accurate, precise and traceable. The offline CMM programming for inspection and the optimisation of the CMM inspection plan are crucial parts of the study and are discussed. A dimensional measurement plan per the ASME B89.7.2 standard is an important requirement for reaching an optimised CMM measurement plan and strategy. The effects of the probing strategy, stylus configuration and approximation strategy on the measurements of the circumferential dovetails of the developmental prototype disc are discussed. The results are presented as enhanced R&R (repeatability and reproducibility) values, with uncertainty levels within the desired limits. The findings from the measurement strategy adopted for dovetail evaluation and inspection time optimisation are discussed with the help of various analyses and graphical outputs obtained from the verification process.
Keywords: coordinate measuring machine, CMM, aero engine, articulated measuring machine, fan disc
Procedia PDF Downloads 106
1037 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation
Authors: Samuel Ahamefula Mba
Abstract:
Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes, and understanding it remains challenging due to its intricate nature. One approach to gaining insight into turbulence is the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim is to develop a numerical framework that describes and analyses fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, and establishes a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the underlying mechanisms driving turbulence and its impact on various phenomena. The work requires deriving the Poisson equation for pressure from the Navier-Stokes equations and resolving it effectively using Chebyshev finite-difference techniques. For the mathematical analysis, we consider bounded domains with smooth solutions and non-periodic boundary conditions. A hybrid computational approach combining direct numerical simulation (DNS) and Large Eddy Simulation with Wall Models (LES-WM) is utilized to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev-finite difference, hybrid computational approach, large eddy simulation with wall models, direct numerical simulation
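The pressure Poisson equation referred to in the title follows, for constant-density flow, from taking the divergence of the incompressible momentum equation and applying continuity; in index notation:

```latex
% Incompressible Navier-Stokes momentum equation:
\frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \nu \nabla^2 u_i
% Taking the divergence and using continuity (\partial u_i / \partial x_i = 0),
% the unsteady and viscous terms vanish, leaving the Poisson equation for pressure:
\nabla^2 p = -\rho \, \frac{\partial u_i}{\partial x_j}\frac{\partial u_j}{\partial x_i}
```

This elliptic equation couples the pressure field globally to the instantaneous velocity gradients, which is why it must be resolved carefully (here via Chebyshev finite differences) on bounded, non-periodic domains.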
Procedia PDF Downloads 92
1036 Comparison of Two Anesthetic Methods during Interventional Neuroradiology Procedure: Propofol versus Sevoflurane Using Patient State Index
Authors: Ki Hwa Lee, Eunsu Kang, Jae Hong Park
Abstract:
Background: Interventional neuroradiology (INR) has been a rapidly growing and evolving neurosurgical field during the past few decades. Sevoflurane and propofol are both suitable anesthetics for INR procedures, and monitoring of the depth of anesthesia is used very widely. The SEDLine™ monitor, a 4-channel processed EEG monitor, uses a proprietary algorithm to analyze the raw EEG signal and displays Patient State Index (PSI) values. Only a few studies have examined the PSI in neuro-anesthesia. We aimed to investigate the differences in PSI values and hemodynamic variables between sevoflurane and propofol anesthesia during INR procedures. Methods: We retrospectively reviewed the medical records of patients scheduled to undergo embolization of a non-ruptured intracranial aneurysm by a single operator from May 2013 to December 2014. Sixty-five patients were categorized into two groups: sevoflurane (n = 33) vs. propofol (n = 32). The PSI values, hemodynamic variables, and use of hemodynamic drugs were analyzed. Results: Significant differences were seen between PSI values obtained during different perioperative stages in both groups (P < 0.0001). The PSI values of the propofol group were lower than those of the sevoflurane group during the INR procedure (P < 0.01). Patients in the propofol group had a more prolonged time to extubation and a greater phenylephrine requirement than the sevoflurane group (p < 0.05). Anti-hypertensive drugs were administered more often during extubation in the sevoflurane group (p < 0.05). Conclusions: The PSI can detect the depth of anesthesia and changes in anesthetic concentration during INR procedures. Extubation was faster in the sevoflurane group, but a smoother recovery was seen in the propofol group.
Keywords: interventional neuroradiology, patient state index, propofol, sevoflurane
Procedia PDF Downloads 179
1035 Research on Old Community Planning Strategy in Mountainous City from The Perspective of Physical Activity: A Case Study of Daxigou Street Community, Chongqing
Authors: Yang Liandong
Abstract:
The rapid development of cities has triggered a series of urban health problems. Residents' daily routines have generally shifted toward long-term unhealthy patterns of work and rest, and the prevalence of chronic diseases in the population is on the rise. Promoting physical activity is an effective way to enhance the population's health and reduce the risk of various chronic diseases. As the most basic unit of the city, the community is the living space residents use most frequently in their daily activities and the best spatial carrier for all kinds of physical activity, so community planning research is of great significance for promoting physical activity. Owing to their special conditions, the old communities in mountainous cities present compact, three-dimensional spatial characteristics and suffer from problems such as disordered spatial organization, scattered distribution, and low utilization rates. This paper selects four communities in Daxigou Street, Yuzhong District, Chongqing as the research object and analyzes their current situation through a literature review, field investigation, and interviews. It then puts forward planning strategies for promoting physical activity in old communities in mountainous cities from four aspects: building a convenient and smooth public space system, creating diversified and shared activity spaces, creating a beautiful and healing community landscape, and providing convenient and complete supporting facilities, so as to provide a reference for the healthy development of old communities in mountainous cities.
Keywords: physical activity, community planning, old communities in mountain cities, public space optimization, spatial fairness
Procedia PDF Downloads 25
1034 Modelling Biological Treatment of Dye Wastewater in SBR Systems Inoculated with Bacteria by Artificial Neural Network
Authors: Yasaman Sanayei, Alireza Bahiraie
Abstract:
This paper presents a systematic methodology based on the application of artificial neural networks (ANNs) to a sequencing batch reactor (SBR). The SBR is a fill-and-draw biological wastewater technology that is especially suited for nutrient removal. Removing reactive dye with Sphingomonas paucimobilis bacteria in a sequencing batch reactor is a novel approach to dye removal. The influent COD, MLVSS, and reaction time were selected as the process inputs and the effluent COD and BOD as the process outputs. The best possible result for the discrete pole parameter was a = 0.44. In order to adjust the parameters of the ANN, the Levenberg-Marquardt (LM) algorithm was employed. The results predicted by the model were compared to the experimental data and showed a high correlation, with R² > 0.99 and a low mean absolute error (MAE). The results from this study reveal that the developed model is accurate and efficacious in predicting the COD and BOD parameters of dye-containing wastewater treated by SBR. The proposed modeling approach can be applied to other industrial wastewater treatment systems to predict effluent characteristics. Note that SBRs are normally operated with a constant, predefined duration of the stages, resulting in less efficient operation. Data obtained from the on-line electronic sensors installed in the SBR and from control quality laboratory analysis have been used to develop the optimal architecture of two different ANNs. The results have shown that the developed models can be used as efficient and cost-effective predictive tools for the system analysed.
Keywords: artificial neural network, COD removal, SBR, Sphingomonas paucimobilis
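As a minimal sketch of the modeling idea, the snippet below fits a small feed-forward network to synthetic SBR-like data with the abstract's three inputs (influent COD, MLVSS, reaction time) and one output (effluent COD). The synthetic data, network size, and the L-BFGS solver (scikit-learn does not provide Levenberg-Marquardt) are all illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
# Hypothetical process inputs: influent COD (mg/L), MLVSS (mg/L), reaction time (h)
X = rng.uniform([200, 1000, 2], [800, 4000, 12], size=(200, 3))
# Synthetic effluent COD: decreases with biomass and reaction time (illustrative only)
y = 0.3 * X[:, 0] - 0.02 * X[:, 1] - 5.0 * X[:, 2] + rng.normal(0, 5, 200)

scaler = MinMaxScaler()            # min-max scaling of the inputs
Xs = scaler.fit_transform(X)

# Small ANN; solver="lbfgs" stands in for the paper's Levenberg-Marquardt
model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(Xs, y)
print(round(model.score(Xs, y), 3))   # R^2 on the training data
```

On this smooth synthetic target the network reaches a high R², mirroring the kind of fit the abstract reports on real data.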
Procedia PDF Downloads 412
1033 Combat Capability Improvement Using Sleep Analysis
Authors: Gabriela Kloudova, Miloslav Stehlik, Peter Sos
Abstract:
The quality of sleep can affect combat performance, where vigilance, accuracy, and reaction time are decisive factors. In the present study, airborne and special units are measured on duty using an actigraphy fingerprint scoring algorithm and QEEG (quantitative EEG). The actigraphic variables of interest are: mean nightly sleep duration, mean napping duration, mean 24-h sleep duration, mean sleep latency, mean sleep maintenance efficiency, mean sleep fragmentation index, mean sleep onset time, mean sleep offset time, and mean midpoint time. To determine the individual somnotype of each subject, data such as sleep pattern, chronotype (morning and evening lateness), biological need for sleep (daytime and anytime sleepability), and trototype (daytime and anytime wakeability) will be extracted. Subsequently, a series of recommendations will be included in the training plan based on the daily routine, the timing of day and night activities, the duration of sleep, and the number of sleeping blocks in a defined time. The aim of these modifications to the training plan is to reduce daytime sleepiness; improve vigilance, attention, accuracy, and the speed of conducted tasks; and optimize energy supplies. Regular improvement of the training is expected to have long-term neurobiological consequences, including changes in neuronal activity measured by QEEG. That, in turn, should enhance cognitive functioning in subjects, as assessed by digital cognitive test batteries, and improve their overall performance.
Keywords: sleep quality, combat performance, actigraph, somnotype
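Two of the listed actigraphic variables, sleep maintenance efficiency and the sleep fragmentation index, can be sketched from a binary epoch series. The definitions below are common illustrative ones (fraction of epochs asleep; sleep-to-wake transitions per hour of sleep), not the proprietary fingerprint scoring algorithm used in the study.

```python
import numpy as np

def sleep_metrics(epochs):
    """Summary measures from 1-min actigraphy epochs (1 = sleep, 0 = wake).

    Illustrative definitions only, not the study's proprietary algorithm.
    """
    epochs = np.asarray(epochs)
    efficiency = epochs.mean()  # sleep maintenance efficiency over the rest period
    # Fragmentation: number of sleep-to-wake transitions per hour of sleep
    transitions = int(((epochs[:-1] == 1) & (epochs[1:] == 0)).sum())
    fragmentation = transitions / (epochs.sum() / 60)
    return efficiency, fragmentation

# A hypothetical fragmented night: sleep onset, a broken stretch, then sleep
night = [0, 0] + [1] * 300 + [0, 1] * 5 + [1] * 120
eff, frag = sleep_metrics(night)
print(round(eff, 2), round(frag, 2))
```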
Procedia PDF Downloads 164
1032 Efficient Credit Card Fraud Detection Based on Multiple ML Algorithms
Authors: Neha Ahirwar
Abstract:
In the contemporary digital era, the rise of credit card fraud poses a significant threat to both financial institutions and consumers. As fraudulent activities become more sophisticated, there is an escalating demand for robust and effective fraud detection mechanisms. Advanced machine learning algorithms have become crucial tools in addressing this challenge. This paper conducts a thorough examination of the design and evaluation of a credit card fraud detection system, utilizing four prominent machine learning algorithms: random forest, logistic regression, decision tree, and XGBoost. The surge in digital transactions has opened avenues for fraudsters to exploit vulnerabilities within payment systems. Consequently, there is an urgent need for proactive and adaptable fraud detection systems. This study addresses this imperative by exploring the efficacy of machine learning algorithms in identifying fraudulent credit card transactions. The selection of random forest, logistic regression, decision tree, and XGBoost for scrutiny in this study is based on their documented effectiveness in diverse domains, particularly in credit card fraud detection. These algorithms are renowned for their capability to model intricate patterns and provide accurate predictions. Each algorithm is implemented and evaluated for its performance in a controlled environment, utilizing a diverse dataset comprising both genuine and fraudulent credit card transactions.
Keywords: efficient credit card fraud detection, random forest, logistic regression, XGBoost, decision tree
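A comparison of this kind can be sketched in a few lines with scikit-learn. The snippet below trains the four model families on a synthetic, imbalanced stand-in for a transactions dataset; scikit-learn's gradient boosting stands in for XGBoost so the sketch has no external dependency, and the dataset parameters are illustrative, not those of the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic, imbalanced stand-in for a transactions dataset (1 = fraud)
X, y = make_classification(n_samples=3000, n_features=10, weights=[0.95],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "random forest": RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    # Gradient boosting stands in for XGBoost to keep the sketch dependency-free
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: F1 = {f1_score(y_te, model.predict(X_te)):.3f}")
```

F1 on the minority (fraud) class, rather than plain accuracy, is the usual yardstick under this kind of class imbalance.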
Procedia PDF Downloads 64
1031 Optimization of Assembly and Welding of Complex 3D Structures on the Base of Modeling with Use of Finite Elements Method
Authors: M. N. Zelenin, V. S. Mikhailov, R. P. Zhivotovsky
Abstract:
It is known that residual welding deformations have a negative effect on the processability and operational quality of welded structures, complicating their assembly and reducing their strength. Therefore, selecting an optimal technology that ensures minimum welding deformations is one of the main goals in developing a technology for manufacturing welded structures. Over the years, JSC SSTC has been developing a theory for the estimation of welding deformations and practical measures for reducing and compensating for such deformations during the welding process. For a long time, a methodology based on analytic dependences was used. This methodology allowed the definition of the volumetric changes of the metal due to welding heating and subsequent cooling. However, the dependences for determining structural deformations arising from volumetric changes of the metal in the weld area allowed calculations only for simple structures, such as units, flat sections, and sections with small curvature. For complex 3D structures, estimates based on analytic dependences gave significant errors. To eliminate this shortcoming, it was suggested to use the finite element method to solve the deformation problem. Here, one first calculates the volumes of longitudinal and transverse shortenings of the welded joints using the method of analytic dependences and then, from the obtained shortenings, calculates forces whose action is equivalent to the action of the active welding stresses. Next, a finite element model of the structure is developed and the equivalent forces are applied to this model. From the calculation results, an optimal sequence of assembly and welding is selected, and special measures to reduce and compensate for welding deformations are developed and taken.
Keywords: residual welding deformations, longitudinal and transverse shortenings of welding joints, method of analytic dependences, finite elements method
Procedia PDF Downloads 409
1030 Characteristic Sentence Stems in Academic English Texts: Definition, Identification, and Extraction
Authors: Jingjie Li, Wenjie Hu
Abstract:
Phraseological units in academic English texts have been a central focus of recent corpus linguistic research. A wide variety of phraseological units have been explored, including collocations, chunks, lexical bundles, patterns, semantic sequences, etc. This paper describes a special category of clause-level phraseological units, namely Characteristic Sentence Stems (CSSs), with a view to describing their defining criteria and extraction method. CSSs are contiguous lexico-grammatical sequences which contain a subject-predicate structure and which are frame expressions characteristic of academic writing. The extraction of CSSs consists of six steps: part-of-speech tagging, n-gram segmentation, structure identification, significance-of-occurrence calculation, text range calculation, and overlapping sequence reduction. The significance-of-occurrence calculation is the crux of this study. It includes computing both the internal association and the boundary independence of a CSS and tests the occurring significance of the CSS from both inside and outside perspectives. A new normalization algorithm is also introduced into the calculation of LocalMaxs for reducing overlapping sequences. It is argued that many sentence stems are so recurrent in academic texts that the most typical of them have become the habitual ways of making meaning in academic writing. Therefore, studies of CSSs could have implications and reference value for academic discourse analysis, English for Academic Purposes (EAP) teaching, and writing.
Keywords: characteristic sentence stem, extraction method, phraseological unit, statistical measure
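The n-gram segmentation and internal-association steps can be sketched as follows. The "glue" score here (n-gram probability over the product of its word probabilities) is a crude stand-in for the paper's LocalMaxs-based measure, and the toy corpus is invented for illustration.

```python
from collections import Counter
from itertools import islice

def ngrams(tokens, n):
    """Contiguous n-grams of a token list."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

text = ("we argue that the results suggest that the model is robust "
        "we argue that the data suggest that the approach is sound").split()

uni = Counter(text)
tri = Counter(ngrams(text, 3))
total = len(text)

def glue(gram):
    """Crude internal-association score (not the paper's LocalMaxs variant):
    trigram probability normalised by the product of its word probabilities."""
    p = tri[gram] / total
    return p / ((uni[gram[0]] / total) * (uni[gram[1]] / total)
                * (uni[gram[2]] / total))

best = max(tri, key=glue)
print(best, round(glue(best), 2))
```

Recurrent frames such as "we argue that" score highly because they recur as a unit more often than their word frequencies alone would predict.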
Procedia PDF Downloads 166
1029 A Prediction of Cutting Forces Using Extended Kienzle Force Model Incorporating Tool Flank Wear Progression
Authors: Wu Peng, Anders Liljerehn, Martin Magnevall
Abstract:
In metal cutting, tool wear gradually changes the micro-geometry of the cutting edge. Today there is a significant gap in understanding the impact these geometrical changes have on the cutting forces, which govern tool deflection and heat generation in the cutting zone. Accurate models and an understanding of the interaction between the workpiece and the cutting tool lead to improved accuracy in simulations of the cutting process. These simulations are useful in several application areas, e.g., optimization of insert geometry and machine tool monitoring. This study aims to develop an extended Kienzle force model that accounts for the effects of rake angle variations and tool flank wear on the cutting forces. In this paper, the starting point is cutting force measurements from orthogonal turning tests of pre-machined flanges with well-defined width, using triangular coated inserts to assure orthogonal conditions. The cutting forces were measured by a dynamometer for a set of three different rake angles, and wear progression was monitored during machining by an optical measuring collaborative robot. The method uses the measured cutting forces together with the inserts' flank wear progression to extend the mechanistic cutting force model with flank wear as an input parameter. The adapted cutting force model is validated in a turning process with commercial cutting tools and shows a significant capability for predicting cutting forces while accounting for tool flank wear and inserts with different rake angles. The results of this study suggest that the nonlinear effect of tool flank wear and the interaction between the workpiece and the cutting tool can be captured by the developed cutting force model.
Keywords: cutting force, Kienzle model, predictive model, tool flank wear
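For reference, the classic Kienzle model the paper extends is F_c = k_c1.1 · b · h^(1−m_c). The sketch below evaluates it and adds a simple wear-dependent term; the additive form and all coefficient values are illustrative assumptions, not the paper's fitted extension.

```python
def kienzle_force(b, h, kc11=1700.0, mc=0.25):
    """Classic Kienzle cutting-force model.

    b: chip width (mm); h: undeformed chip thickness (mm);
    kc11: specific cutting force (N/mm^2) at b = h = 1 mm; mc: Kienzle exponent.
    All numeric defaults are generic textbook-style values, not the paper's.
    """
    return kc11 * b * h ** (1 - mc)

def worn_tool_force(b, h, vb, k_wear=800.0, **kw):
    """Illustrative extension: add a ploughing-type term growing with the
    flank wear land width vb (mm). k_wear is a hypothetical fitted
    coefficient (N/mm^2), not a value from the paper."""
    return kienzle_force(b, h, **kw) + k_wear * b * vb

print(round(kienzle_force(2.0, 0.1), 1))         # sharp tool
print(round(worn_tool_force(2.0, 0.1, 0.3), 1))  # flank wear VB = 0.3 mm
```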
Procedia PDF Downloads 107
1028 Techno-Economic Optimization and Evaluation of an Integrated Industrial Scale NMC811 Cathode Active Material Manufacturing Process
Authors: Usama Mohamed, Sam Booth, Aliysn J. Nedoma
Abstract:
As part of the transition to electric vehicles, there has been a recent increase in demand for battery manufacturing. Cathodes typically account for approximately 50% of the total lithium-ion battery cell cost and are a pivotal factor in determining the viability of new industrial infrastructure. Cathodes which offer lower costs whilst maintaining or increasing performance, such as nickel-rich layered cathodes, have a significant competitive advantage when the manufacturing process is scaled up. This project evaluates the techno-economic value proposition of an integrated industrial-scale cathode active material (CAM) production process, closing the mass and energy balances and optimizing the operating conditions using a sensitivity analysis. This is done by developing a process model of a co-precipitation synthesis route in Aspen Plus software, validated against experimental data. The mechanism chemistry and equilibrium conditions were established based on previous literature and HSC Chemistry software. This is followed by integrating the energy streams, adding waste recovery and treatment processes, and testing the effect of key parameters (temperature, pH, reaction time, etc.) on CAM production yield and emissions. Finally, an economic analysis estimates the fixed and variable costs (including capital expenditure, labour costs, raw materials, etc.) to calculate the cost of CAM ($/kg and $/kWh), the total plant cost ($), and the net present value (NPV). This work sets a foundational blueprint for future research into sustainable industrial-scale processes for CAM manufacturing.
Keywords: cathodes, industrial production, nickel-rich layered cathodes, process modelling, techno-economic analysis
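The NPV calculation at the end of such an analysis is a simple discounted cash-flow sum. The sketch below shows the mechanics with invented figures (a hypothetical 100 M$ capital outlay followed by a flat yearly net revenue); none of the numbers come from the paper.

```python
def npv(rate, cashflows):
    """Net present value of yearly cashflows, year 0 first (M$)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical plant: 100 M$ capex up front, then 25 M$/year net for 10 years
flows = [-100.0] + [25.0] * 10

print(round(npv(0.08, flows), 1))  # NPV at an assumed 8% discount rate
```

A positive NPV at the chosen discount rate is the usual go/no-go signal; in the paper this sits alongside the $/kg and $/kWh cost of CAM.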
Procedia PDF Downloads 99
1027 3D Object Retrieval Based on Similarity Calculation in 3D Computer Aided Design Systems
Authors: Ahmed Fradi
Abstract:
Nowadays, technological advances in the acquisition, modeling, and processing of three-dimensional (3D) object data have led to the creation of models stored in huge databases, which are used in various domains such as computer vision, augmented reality, the game industry, medicine, CAD (computer-aided design), 3D printing, etc. At the same time, industry benefits from powerful modeling tools enabling designers to easily and quickly produce 3D models. This ease of acquisition and modeling makes it possible to create large 3D model databases, which then become difficult to navigate. Therefore, the indexing of 3D objects appears to be a necessary and promising solution for managing this type of data: extracting model information, retrieving an existing model, or calculating the similarity between 3D objects. The objective of the proposed research is to develop a framework allowing easy and fast access to 3D objects in a CAD model database, with a specific indexing algorithm to find objects similar to a reference model. Our main objectives are to study existing methods for similarity calculation between 3D objects (essentially shape-based methods), specifying the characteristics of each method as well as the differences between them, and then to propose a new approach for indexing and comparing 3D models that is suitable for our case study and based on some of the previously studied methods. Our proposed approach is finally illustrated by an implementation and evaluated in a professional context.
Keywords: CAD, 3D object retrieval, shape based retrieval, similarity calculation
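One classic shape-based method of the kind surveyed here is the D2 shape distribution (a histogram of distances between random point pairs on the model), compared with a simple histogram distance. The sketch below is a generic illustration of that family of descriptors, not the paper's proposed approach.

```python
import numpy as np

def d2_descriptor(points, n_pairs=5000, bins=32, rng=None):
    """D2 shape distribution: normalised histogram of distances between
    random point pairs, a classic shape-based 3D descriptor."""
    if rng is None:
        rng = np.random.default_rng(0)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0, 1))
    return hist / hist.sum()

def similarity(a, b):
    """L1-based similarity of two normalised descriptors (1 = identical)."""
    return 1 - 0.5 * np.abs(a - b).sum()

rng = np.random.default_rng(0)
cube = rng.uniform(-1, 1, size=(2000, 3))                 # points in a cube
sphere = rng.normal(size=(2000, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)   # points on a sphere

sim_same = similarity(d2_descriptor(cube), d2_descriptor(cube))
sim_diff = similarity(d2_descriptor(cube), d2_descriptor(sphere))
print(sim_same > sim_diff)
```

Descriptors of this kind are pose-invariant and cheap to index, which is why they are a common baseline for CAD model retrieval.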
Procedia PDF Downloads 261
1026 Energy Consumption Estimation for Hybrid Marine Power Systems: Comparing Modeling Methodologies
Authors: Kamyar Maleki Bagherabadi, Torstein Aarseth Bø, Truls Flatberg, Olve Mo
Abstract:
Hydrogen fuel cells and batteries are among the promising solutions aligned with carbon emission reduction goals for the marine sector. However, the higher installation and operating costs of hydrogen-based systems compared to conventional diesel gensets raise questions about the appropriate hydrogen tank size and about energy and fuel consumption estimates. Ship designers need methodologies and tools to calculate energy and fuel consumption for different component sizes to facilitate decision-making regarding feasibility and performance for retrofit and design cases. The aim of this work is to compare three alternative modeling approaches for the estimation of energy and fuel consumption with various hydrogen tank sizes, battery capacities, and load-sharing strategies. A fishery vessel is selected as an example, using load demand data logged over a year of operations. The modeled power system consists of a PEM fuel cell, a diesel genset, and a battery. The methodologies used are: first, an energy-based model; second, a model considering load variations in the time domain with a rule-based power management system (PMS); and third, a load-variation model with a dynamic PMS strategy based on optimization with perfect foresight. The errors and potentials of the methods are discussed, and design sensitivity studies for this case are conducted. The results show that the energy-based method can estimate fuel and energy consumption with acceptable accuracy. However, models that consider the time variation of the load provide more realistic estimates of energy and fuel consumption with respect to hydrogen tank and battery size, while still requiring little computational time.
Keywords: fuel cell, battery, hydrogen, hybrid power system, power management system
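The second methodology, a rule-based PMS over a load time series, can be sketched as a simple dispatch function: the fuel cell covers base load, the battery covers peaks while charged, and the genset takes the remainder. The priority order, power ratings, and SOC threshold below are hypothetical, not the paper's rules.

```python
def dispatch(load_kw, soc, fc_max=400.0, genset_max=600.0,
             batt_max=200.0, soc_min=0.2):
    """Toy rule-based power management for one time step.

    Returns (fuel cell, battery, genset) power in kW. All limits and the
    fuel-cell-first priority are illustrative assumptions.
    """
    fc = min(load_kw, fc_max)               # fuel cell covers base load
    remaining = load_kw - fc
    batt = min(remaining, batt_max) if soc > soc_min else 0.0  # peaks, if charged
    genset = min(remaining - batt, genset_max)                 # diesel takes the rest
    return fc, batt, genset

print(dispatch(350.0, 0.8))   # low load: fuel cell alone
print(dispatch(700.0, 0.8))   # peak load: fuel cell + battery + genset
print(dispatch(700.0, 0.1))   # battery depleted: genset covers the peak
```

Summing fuel use over a year of logged load steps with rules like these is what separates this approach from the purely energy-based model.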
Procedia PDF Downloads 34
1025 Pricing, Production and Inventory Policies Manufacturing under Stochastic Demand and Continuous Prices
Authors: Masoud Rabbani, Majede Smizadeh, Hamed Farrokhi-Asl
Abstract:
We study the joint determination of prices and production over a multi-period horizon under general non-stationary stochastic demand with continuous prices. In some periods, production capacity must be increased to satisfy demand. This paper presents a model to aid multi-period production capacity planning by quantifying the trade-off between product quality and production cost. Product quality is estimated as the statistical variation from the target performance obtained from the output tolerances of the production machines that manufacture the components. We consider different tolerances for the different machines used to increase capacity. The production cost is estimated as the total cost of owning and operating a production facility during the planning horizon; thus, capacity planning has a cost that impacts the price. Pricing products is often difficult because customers have a reservation price they are willing to pay, which affects both price and demand. We determine prices and production for the periods after capacity is enhanced, taking the reservation price into account when setting prices. First, we use an algorithm based on fuzzy sets of the optimal objective function values to determine the capacity plan, maximizing the interval from the upper bound of the minimized objectives and defining weights for the objectives. Then we determine the inventory and pricing policies. A lemma allows the problem to be solved in MATLAB to find an exact answer.
Keywords: price policy, inventory policy, capacity planning, product quality, epsilon-constraint
Procedia PDF Downloads 568
1024 Mobile Traffic Management in Congested Cells using Fuzzy Logic
Authors: A. A. Balkhi, G. M. Mir, Javid A. Sheikh
Abstract:
To cater to the demands of increasing traffic and new applications, cellular mobile networks face changes in infrastructure deployment that make them heterogeneous. To reduce processing overhead, densely deployed cells require smart behavior, with self-organizing capabilities and high adaptation to their neighborhood. We propose the self-organized sharing of unused resources, typically the excess unused channels of neighbouring cells, with densely populated cells to reduce handover failure rates. Neighboring cells share unused channels after fulfilling a conditional candidature criterion based on threshold values, so that they do not themselves suffer channel starvation in case of an abrupt change in the traffic pattern. Cells are classified as 'red', 'yellow', or 'green' according to the channels available in the cell, which is governed by the traffic pattern and the thresholds. To combat a channel deficiency in a red cell, the migration of unused channels from under-loaded cells is explored, hierarchically, from the qualified candidate neighboring cells. The resources are returned when the congested cell is again capable of self-contained traffic management. In either case, conditional sharing of resources is executed for enhanced traffic management, so that User Equipment (UE) is provided uninterrupted service with high Quality of Service (QoS). The fuzzy logic-based simulation results show that the proposed algorithm efficiently improves handoff success rates.
Keywords: candidate cell, channel sharing, fuzzy logic, handover, small cells
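The colour classification and the conditional channel borrowing can be sketched with crisp thresholds as below (the paper uses fuzzy membership rather than hard cutoffs, and the threshold values here are invented for illustration).

```python
def classify(free_channels, low=5, high=15):
    """Classify a cell by its free channels. Thresholds are hypothetical;
    the paper's scheme uses fuzzy membership instead of hard cutoffs."""
    if free_channels < low:
        return "red"
    if free_channels < high:
        return "yellow"
    return "green"

def borrow(cells, needy, amount=3):
    """Migrate unused channels to a congested ('red') cell from the best
    neighbour that would remain 'green' after lending (the candidature
    criterion, so the lender is not starved itself)."""
    for name, free in sorted(cells.items(), key=lambda kv: -kv[1]):
        if name != needy and classify(free - amount) == "green":
            cells[name] -= amount
            cells[needy] += amount
            break
    return cells

cells = {"A": 2, "B": 20, "C": 9}   # A is congested, B is under-loaded
print(classify(cells["A"]))
print(borrow(cells, "A"))
```

Returning the borrowed channels once the red cell recovers would be the symmetric operation, again gated by the same candidature check.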
Procedia PDF Downloads 120
1023 Automatic Detection and Update of Region of Interest in Vehicular Traffic Surveillance Videos
Authors: Naydelis Brito Suárez, Deni Librado Torres Román, Fernando Hermosillo Reynoso
Abstract:
Automatic detection and generation of a dynamic ROI (region of interest) in vehicle traffic surveillance videos from a static camera in intelligent transportation systems is challenging for computer vision-based systems. The dynamic ROI, being a changing ROI, should capture any moving object located outside of a static ROI. In this work, the video is represented by a tensor model composed of a background tensor and a foreground tensor, which contains all moving vehicles or objects. The values of each pixel over a time interval are represented as time series, and some pixel rows were selected. This paper proposes a pixel-entropy-based algorithm for the automatic detection and generation of a dynamic ROI in traffic videos, under the assumption of two types of theoretical pixel-entropy behavior: (1) a pixel located on the road shows a high entropy value due to disturbances in this zone caused by vehicle traffic; (2) a pixel located outside the road shows a relatively low entropy value. To study the statistical behavior of the selected pixels, detecting entropy changes and consequently moving objects, Shannon, Tsallis, and approximate entropies were employed. Although Tsallis entropy achieved very good results in real time, approximate entropy showed slightly better results but took more time.
Keywords: convex hull, dynamic ROI detection, pixel entropy, time series, moving objects
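The assumed entropy contrast between road and non-road pixels is easy to demonstrate with Shannon entropy on two synthetic pixel time series: one with traffic-like fluctuations and one nearly static. The series and bin count below are illustrative, not the paper's data.

```python
import numpy as np

def shannon_entropy(series, bins=16):
    """Shannon entropy (bits) of a pixel-intensity time series (0-255)."""
    hist, _ = np.histogram(series, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
road = rng.integers(0, 256, 500)              # traffic pixel: widely varying
background = 120 + rng.integers(-2, 3, 500)   # static pixel: near-constant

print(shannon_entropy(road) > shannon_entropy(background))
```

Thresholding per-pixel entropy in this way is what lets the algorithm mark high-entropy pixels as road (dynamic ROI) and low-entropy pixels as background.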
Procedia PDF Downloads 73
1022 Decision Making System for Clinical Datasets
Authors: P. Bharathiraja
Abstract:
A computer-aided decision-making system is used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning into decision-making systems. A fuzzy rule-based inference technique can be used for classification in order to incorporate human reasoning into the decision-making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease data set from the University of California at Irvine (UCI) Machine Learning Repository. The data was split into training and testing sets using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule-based classification approaches. Furthermore, it is observed that the use of fuzzy logic and a fuzzy inference mechanism handles uncertainty and resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
Keywords: decision making, data mining, normalization, fuzzy rule, classification
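Two of the preprocessing steps named above, min-max normalization and equal-width partitioning into candidate fuzzy intervals, can be sketched directly. The attribute name and values below are invented for illustration.

```python
def min_max(values):
    """Min-max normalization to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def equal_width_bins(values, k=3):
    """Assign each value to one of k equal-width intervals, e.g. candidate
    'low'/'medium'/'high' ranges to be fuzzified afterwards."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    return [min(int((v - lo) / width), k - 1) for v in values]

# Hypothetical attribute values (e.g. serum cholesterol, mg/dL)
cholesterol = [180, 210, 240, 300, 360]
print(min_max(cholesterol))
print(equal_width_bins(cholesterol))
```

In the full system these crisp intervals would be replaced by overlapping fuzzy membership functions, so that a borderline reading partially belongs to two intervals.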
Procedia PDF Downloads 517