Search results for: rapid compression machine
4560 AI for Efficient Geothermal Exploration and Utilization
Authors: Velimir "monty" Vesselinov, Trais Kliplhuis, Hope Jasperson
Abstract:
Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, and geology. Machine learning algorithms can identify subtle patterns and relationships within these data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a Science-Informed Machine Learning (SIML) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) based on simple algebraic mappings to represent complex processes. In contrast, the SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physics laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form. Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal
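The contrast the abstract draws between algebraic and science-informed neurons can be sketched in a few lines. This is a minimal, hypothetical illustration, not GeoTGO code: one "neuron" applies Fourier's law of heat conduction as its mapping, so its output is physically meaningful by construction, while the traditional neuron is a plain affine map. All names and values are invented for the example.

```python
# Minimal sketch of the SIML idea: a "neuron" whose mapping is a physical law
# rather than a generic algebraic activation. Illustrative only.

def fourier_neuron(k, dT_dx):
    """Physics-based mapping: heat flux from conductivity and temperature gradient
    via Fourier's law q = -k * dT/dx. The output always satisfies the law."""
    return -k * dT_dx

def algebraic_neuron(w, b, x):
    """Traditional ML mapping: a simple affine transform with no physical meaning."""
    return w * x + b

# A SIML layer would compose physics mappings; outputs stay interpretable.
flux = fourier_neuron(k=2.5, dT_dx=-4.0)  # heat flux for a conductive medium
print(flux)  # 10.0, positive flux down the temperature gradient
```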
Procedia PDF Downloads 52
4559 Machine Learning Approach for Mutation Testing
Authors: Michael Stewart
Abstract:
Mutation testing is a type of software testing proposed in the 1970s in which program statements are deliberately changed to introduce simple errors, so that test cases can be validated by determining whether they detect the errors. Test cases are executed against the mutant code to determine whether one fails, detecting the error and confirming the program is correctly tested. One major issue with this type of testing is that it becomes computationally intensive to generate and test all possible mutations for complex programs. This paper used reinforcement learning and parallel processing within the context of mutation testing for the selection of mutation operators and test cases, which reduced the computational cost of testing and improved test suite effectiveness. Experiments were conducted using sample programs to determine how well the reinforcement learning-based algorithm performed with one live mutation, multiple live mutations, and no live mutations. The experiments, measured by mutation score, were used to update the algorithm and improve prediction accuracy. The performance was then evaluated on multi-processor computers. With reinforcement learning, the mutation operators utilized were reduced by 50-100%. Keywords: automated-testing, machine learning, mutation testing, parallel processing, reinforcement learning, software engineering, software testing
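The core mutation-testing loop described above can be sketched in a few lines of Python. This is a toy illustration, not the paper's tool: one hypothetical mutation operator ('+' changed to '-') is applied to an invented function, and a single test case either kills the mutant or lets it survive.

```python
# Toy mutation-testing loop: mutate a statement, rerun the tests, and check
# whether the mutant is "killed" (a test fails against the mutated code).

ORIGINAL = "def add(a, b):\n    return a + b\n"

def run_tests(source):
    """Return True if the (single-case) test suite passes against the source."""
    namespace = {}
    exec(source, namespace)
    return namespace["add"](2, 3) == 5  # one test case

# Apply one mutation operator: arithmetic '+' replaced by '-'.
mutant = ORIGINAL.replace("a + b", "a - b")

killed = run_tests(ORIGINAL) and not run_tests(mutant)
print("mutant killed:", killed)  # True: the test case detects the error
```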
Procedia PDF Downloads 197
4558 An Experimental Machine Learning Analysis on Adaptive Thermal Comfort and Energy Management in Hospitals
Authors: Ibrahim Khan, Waqas Khalid
Abstract:
The healthcare sector is known to account for a high proportion of total HVAC energy consumption, owing to the excessive cooling and heating required to maintain human thermal comfort indoors for patients undergoing treatment in hospital wards, rooms, and intensive care units. Indoor thermal comfort conditions in selected hospitals of Islamabad, Pakistan, were measured on a real-time basis, with first-hand experimental data collected using calibrated sensors measuring ambient temperature, wet bulb globe temperature, relative humidity, air velocity, light intensity, and CO2 levels. The experimental data recorded were analyzed in conjunction with thermal comfort questionnaire surveys, in which the participants, including patients, doctors, nurses, and hospital staff, were assessed based on their thermal sensation, acceptability, preference, and comfort responses. The recorded dataset, including experimental and survey-based responses, was further analyzed to develop a correlation between operative temperature, operative relative humidity, and other measured operative parameters with the predicted mean vote and adaptive predicted mean vote, with the adaptive temperature and adaptive relative humidity estimated using the seasonal dataset gathered for both summer (hot and dry, and hot and humid) and winter (cold and dry, and cold and humid) climate conditions. A machine learning logistic regression algorithm was used to train on the operative experimental parameters and develop a correlation between patient sensations and the thermal environmental parameters, from which a new ML-based adaptive thermal comfort model was proposed and developed in our study. Finally, the accuracy of our model was determined using K-fold cross-validation. Keywords: predicted mean vote, thermal comfort, energy management, logistic regression, machine learning
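The modelling step described above can be sketched as a logistic-regression classifier on one thermal parameter, evaluated with K-fold cross-validation. The data below are synthetic placeholders (an invented comfort threshold near 26 °C), not the study's measurements, and the tiny gradient-descent trainer stands in for a library implementation.

```python
import math

def train_logistic(xs, ys, lr=0.1, epochs=500):
    """Fit a 1-D logistic regression by stochastic gradient ascent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

def k_fold_accuracy(xs, ys, k=5):
    """Mean held-out accuracy over k interleaved folds."""
    idx = list(range(len(xs)))
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        w, b = train_logistic([xs[i] for i in train], [ys[i] for i in train])
        correct = sum(
            (1.0 / (1.0 + math.exp(-(w * xs[i] + b))) >= 0.5) == (ys[i] == 1)
            for i in fold
        )
        accs.append(correct / len(fold))
    return sum(accs) / k

# Synthetic example: comfort (1) below a ~26 C operative temperature, else 0.
temps = [t * 0.5 for t in range(40, 70)]      # 20.0 .. 34.5 C
comfort = [1 if t < 26 else 0 for t in temps]
xs = [t - 26 for t in temps]                  # centre the feature
print(round(k_fold_accuracy(xs, comfort), 2))
```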
Procedia PDF Downloads 62
4557 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition
Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can
Abstract:
To effectively combat climate change, many countries around the world have committed to the decarbonisation of their electricity supply, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity to effectively combat climate change, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the network infrastructure necessary to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty of power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially when applied to realistically sized power system models. To meet these challenges, there is an increasing need for efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method.
One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of a mixed-integer nature, and therefore solving them requires significant amounts of time and resources. Hence, in this proposal, we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% for estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, the integration of the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique, and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models. Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning
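The acceleration idea described above can be sketched schematically: solve a linear relaxation of a subproblem, then let a trained binary classifier fix the integer variables from the relaxed solution instead of solving the MIP. Here a learned threshold stands in for the paper's classifier, and the training pairs (LP-relaxed values versus MIP-optimal binaries) are synthetic.

```python
def train_threshold(relaxed_values, true_binaries):
    """Pick the cutoff that best reproduces the known integer solutions:
    a minimal stand-in for a trained binary classifier."""
    best_t, best_acc = 0.5, -1.0
    for t in (i / 100 for i in range(101)):
        acc = sum(
            (v >= t) == (b == 1)
            for v, b in zip(relaxed_values, true_binaries)
        ) / len(true_binaries)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Synthetic training set: LP-relaxed investment variables in [0, 1] and the
# corresponding MIP-optimal binary decisions.
relaxed = [0.05, 0.12, 0.34, 0.48, 0.61, 0.77, 0.90, 0.97]
binary  = [0,    0,    0,    0,    1,    1,    1,    1]

threshold, accuracy = train_threshold(relaxed, binary)
print(threshold, accuracy)  # learned cutoff and its training accuracy
```

In the actual algorithm such predictions must still be verified against the subproblem constraints, which is how the paper can guarantee optimality despite using a classifier.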
Procedia PDF Downloads 83
4556 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal
Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan
Abstract:
This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear with prolonged usage, causing fluctuating vibrations. Increasing the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to these vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a data acquisition system (DAQ). Statistical features were extracted from the acquired vibration signal under various operating conditions. The extracted features were then given as input to the algorithms for fault classification. Supervised machine learning algorithms such as Support Vector Machines (SVM) and unsupervised algorithms such as the Deep Feed Forward Neural Network (DFFNN) and Deep Belief Network (DBN) are used for fault classification. A fusion of the DBN and DFFNN classifiers was designed to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy for each algorithm was thoroughly studied, tabulated, and graphically analysed for fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded better classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency. Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal
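The feature-extraction step described above can be sketched as follows: statistical features computed from a raw vibration signal before being fed to the classifiers. The signal below is synthetic (a sine carrier with one impulsive spike standing in for a localized gear fault), and the feature set is a typical choice for vibration diagnostics, not necessarily the paper's exact list.

```python
import math

def statistical_features(signal):
    """Common statistical features used in vibration-based fault diagnosis."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    kurtosis = sum((x - mean) ** 4 for x in signal) / (n * std ** 4)
    crest = max(abs(x) for x in signal) / rms
    return {"mean": mean, "rms": rms, "std": std,
            "kurtosis": kurtosis, "crest_factor": crest}

# Synthetic vibration snippet: a sine carrier with one impulsive fault spike.
signal = [math.sin(0.1 * i) for i in range(200)]
signal[50] = 5.0  # impulse typical of a localized gear/bearing defect

feats = statistical_features(signal)
print({k: round(v, 3) for k, v in feats.items()})
```

Impulsive faults inflate kurtosis and crest factor well above their values for a pure sine, which is why such features separate healthy and faulty conditions before any classifier is applied.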
Procedia PDF Downloads 111
4555 Morphological, Mechanical, and Tribological Properties Investigations of CMTed Parts of Al-5356 Alloy
Authors: Antar Bouhank, Youcef Beellal, Samir Adjel, Abdelmadjid Ababsa
Abstract:
This paper investigates the impact of 3D printing parameters using the cold metal transfer (CMT) technique on the morphological, mechanical, and tribological properties of walls and massive parts made from aluminum alloy. The parameters studied include current intensity, torch travel speed, printing increment, and shielding gas flow rate. The manufactured parts are walls and massive parts with different filling strategies, using grid and zigzag patterns, produced at different current intensities. The main goal of the article is to identify welding parameters that yield parts with few defects and improved properties from the point of view of the properties mentioned above. The results show that high current intensity causes rapid solidification, resulting in high porosity and low hardness values; very high current intensity can cause very rapid solidification, which raises the melting point, so the part retains its most stable shape. Furthermore, the results show an evident relationship between hardness, coefficient of friction, and the wear test: the higher the current intensity, the lower the hardness, and the same holds for the coefficient of friction. Micrography of the walls shows a random granular structure with fine grain boundaries and varying grain size. Some interesting results are presented in this paper. Keywords: aluminum alloy, porosity, microstructures, hardness
Procedia PDF Downloads 45
4554 Effect of Rapid Thermal Annealing on the Optical Properties of InAs Quantum Dots Grown on (100) and (311)B GaAs Substrates by Molecular Beam Epitaxy
Authors: Amjad Almunyif, Amra Alhassni, Sultan Alhassan, Maryam Al Huwayz, Saud Alotaibi, Abdulaziz Almalki, Mohamed Henini
Abstract:
The effect of rapid thermal annealing (RTA) on the optical properties of InAs quantum dots (QDs) grown at an As overpressure of 2 × 10⁻⁶ Torr by molecular beam epitaxy (MBE) on (100) and (311)B GaAs substrates was investigated using the photoluminescence (PL) technique. PL results showed that, for the as-grown samples, the QDs grown on the high-index (311)B plane have lower PL intensity and lower full width at half maximum (FWHM) than those grown on the conventional (100) plane. The latter demonstrates that the (311)B QDs have better size uniformity than the (100) QDs. Compared with the as-grown samples, a blue-shift was observed for all samples with increasing annealing temperature from 600°C to 700°C. For the (100) samples, a narrowing of the FWHM was observed with increasing annealing temperature from 600°C to 700°C. However, in the (311)B samples, the FWHM showed a different behaviour; it slightly increased when the samples were annealed at 600°C and then decreased when the annealing temperature increased to 700°C. As expected, the PL peak intensity for all samples increased when the laser excitation power increased. The PL peak energy showed a strong redshift when the temperature was increased from 10 K to 120 K and exhibited an abnormal S-shape behaviour as a function of temperature for all samples. Most samples exhibited a significant enhancement in their activation energies when annealed at 600°C and 700°C, suggesting that annealing annihilated defects created during sample growth.
Procedia PDF Downloads 173
4553 The Study of Rapid Entire Body Assessment and Quick Exposure Check Correlation in an Engine Oil Company
Authors: Mohammadreza Ashouri, Majid Motamedzade
Abstract:
Rapid Entire Body Assessment (REBA) and Quick Exposure Check (QEC) are two general methods for assessing the risk factors of work-related musculoskeletal disorders (WMSDs). This study aimed to compare the ergonomic risk assessment outputs of QEC and REBA in terms of agreement in the distribution of postural loading scores based on analysis of working postures. This cross-sectional study was conducted in an engine oil company in which 40 jobs were studied. A trained occupational health practitioner observed all jobs. Job information was collected to ensure the completion of the ergonomic risk assessment tools, QEC and REBA. The results revealed a significant correlation between the final scores (r = 0.731) and the action levels (r = 0.893) of the two applied methods. Comparison of the action levels and final scores of the two methods showed no significant difference among working departments. Most of the studied postures acquired low and moderate risk levels in the QEC assessment (low risk = 20%, moderate risk = 50%, high risk = 30%) and in the REBA assessment (low risk = 15%, moderate risk = 60%, high risk = 25%). There is a significant correlation between the two methods: they are strongly correlated in identifying risky jobs and determining the potential risk for the incidence of WMSDs. Therefore, researchers may apply both methods interchangeably for postural risk assessment in appropriate working environments. Keywords: observational method, QEC, REBA, musculoskeletal disorders
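The agreement analysis described above reduces to a Pearson correlation between the final REBA and QEC scores across jobs. A minimal sketch follows; the scores below are synthetic placeholders for illustration, not the study's data (which reported r = 0.731 for final scores).

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-job scores from the two observational methods.
reba = [3, 5, 7, 4, 9, 6, 8, 2]          # final REBA scores
qec  = [40, 55, 70, 48, 88, 60, 75, 35]  # QEC percentage scores

print(round(pearson_r(reba, qec), 3))
```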
Procedia PDF Downloads 359
4552 Caged Compounds as Light-Dependent Initiators for Enzyme Catalysis Reactions
Authors: Emma Castiglioni, Nigel Scrutton, Derren Heyes, Alistair Fielding
Abstract:
By using light as a trigger, it is possible to study many biological processes, such as the activity of genes, proteins, and other molecules, with precise spatiotemporal control. Caged compounds, where biologically active molecules are generated from an inert precursor upon laser photolysis, offer the potential to initiate such biological reactions with high temporal resolution. As light acts as the trigger for cleaving the protecting group, the 'caging' technique provides a number of advantages: it can be intracellular, rapid, and controlled in a quantitative manner. We are developing caging strategies to study the catalytic cycle of a number of enzyme systems, such as nitric oxide synthase and ethanolamine ammonia lyase. These include the use of caged substrates, caged electrons, and the possibility of caging the enzyme itself. In addition, we are developing a novel freeze-quench instrument to study these reactions, which combines rapid mixing and flashing capabilities. Reaction intermediates will be trapped at low temperatures and analysed using electron paramagnetic resonance (EPR) spectroscopy to identify the involvement of any radical species during catalysis. EPR techniques typically require relatively long measurement times and, very often, low temperatures to fully characterise these short-lived species. Therefore, common rapid mixing techniques, such as stopped-flow or quench-flow, are not directly suitable. However, the combination of rapid freeze-quench (RFQ) followed by EPR analysis provides the ideal approach to kinetically trap and spectroscopically characterise these transient radical species. In a typical RFQ experiment, two reagent solutions are delivered to the mixer via two syringes driven by a pneumatic actuator or stepper motor. The newly mixed solution is then sprayed into a cryogenic liquid or onto a cryogenic surface, and the frozen sample is collected and packed into an EPR tube for analysis.
The earliest RFQ instruments used a hydraulic ram as the drive unit, with the sample sprayed directly into a cryogenic liquid (nitrogen, isopentane, or petroleum). Improvements to the RFQ technique have arisen from the design of new mixers that reduce both the volume and the mixing time. In addition, the cryogenic isopentane bath has been coupled to a filtering system or replaced by spraying the solution onto a surface that is frozen via thermal conduction with a cryogenic liquid. In our work, we are developing a novel RFQ instrument that combines freeze-quench technology with flashing capabilities to enable the study of both thermally activated and light-activated biological reactions. This instrument also uses a new rotating-plate design based on magnetic couplings, removing the need for mechanical motorised rotation, which can otherwise be problematic at cryogenic temperatures. Keywords: caged compounds, freeze-quench apparatus, photolysis, radicals
Procedia PDF Downloads 207
4551 Horizontal Development of Built-up Area and Its Impacts on the Agricultural Land of Peshawar City District (1991-2014)
Authors: Pukhtoon Yar
Abstract:
Peshawar City is experiencing rapid spatial urban growth, primarily as a result of a high rate of urbanization along with economic development. This paper was designed to understand the impacts of urbanization on agricultural land use change, particularly by focusing on land use change trajectories over the period 1991-2014. We used Landsat imagery (30 m resolution) for 1991 along with SPOT images (2.5 m resolution) for 2014. Ground truthing of the satellite data was performed by collecting information from the Peshawar Development Authority, the revenue department, and real estate agents, and through interviews with officials of the city administration. The temporal satellite images were processed by applying a supervised maximum likelihood classification technique in ArcGIS 9.3. The procedure resulted in five main land use classes: built-up area, farmland, barren land, cultivable wasteland, and water bodies. The analysis revealed that the built-up environment of Peshawar City has more than doubled, from 8.1 percent in 1991 to over 18.2 percent in 2014, predominantly by encroaching on food-producing land. Furthermore, the CA-Markov model predicted that the area under impervious surfaces will continue to grow during the next three decades. This rapid increase in built-up area is attributed to the lack of proper land use planning and management, which has caused chaotic urban sprawl with detrimental social and environmental consequences. Keywords: urban expansion, land use, GIS, remote sensing, Markov model, Peshawar City
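The Markov component of the CA-Markov projection mentioned above can be sketched as follows: land-use class shares are propagated forward with a per-decade transition matrix. The matrix and the non-built-up shares below are illustrative assumptions, not the study's calibrated values; only the 18.2% built-up share for 2014 comes from the abstract.

```python
# Classes: built-up, farmland, other (barren + cultivable waste + water).
# Hypothetical per-decade transition probabilities (rows sum to 1).
P = [
    [1.00, 0.00, 0.00],  # built-up is effectively irreversible
    [0.04, 0.94, 0.02],  # farmland converts mainly to built-up
    [0.02, 0.01, 0.97],
]

def step(shares, P):
    """One Markov step: new share of class j = sum_i shares[i] * P[i][j]."""
    return [sum(shares[i] * P[i][j] for i in range(3)) for j in range(3)]

shares = [0.182, 0.50, 0.318]  # 2014: built-up 18.2% as reported; rest assumed
for decade in ("2014-2024", "2024-2034", "2034-2044"):
    shares = step(shares, P)
    print(decade, [round(s, 3) for s in shares])
```

Because the matrix is row-stochastic, total area is conserved while the built-up share grows monotonically, matching the qualitative prediction in the abstract.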
Procedia PDF Downloads 185
4550 Jointly Optimal Statistical Process Control and Maintenance Policy for Deteriorating Processes
Authors: Lucas Paganin, Viliam Makis
Abstract:
With the advent of globalization, the market competition has become a major issue for most companies. One of the main strategies to overcome this situation is the quality improvement of the product at a lower cost to meet customers’ expectations. In order to achieve the desired quality of products, it is important to control the process to meet the specifications, and to implement the optimal maintenance policy for the machines and the production lines. Thus, the overall objective is to reduce process variation and the production and maintenance costs. In this paper, an integrated model involving Statistical Process Control (SPC) and maintenance is developed to achieve this goal. Therefore, the main focus of this paper is to develop the jointly optimal maintenance and statistical process control policy minimizing the total long run expected average cost per unit time. In our model, the production process can go out of control due to either the deterioration of equipment or other assignable causes. The equipment is also subject to failures in any of the operating states due to deterioration and aging. Hence, the process mean is controlled by an Xbar control chart using equidistant sampling epochs. We assume that the machine inspection epochs are the times when the control chart signals an out-of-control condition, considering both true and false alarms. At these times, the production process will be stopped, and an investigation will be conducted not only to determine whether it is a true or false alarm, but also to identify the causes of the true alarm, whether it was caused by the change in the machine setting, by other assignable causes, or by both. If the system is out of control, the proper actions will be taken to bring it back to the in-control state. At these epochs, a maintenance action can be taken, which can be no action, or preventive replacement of the unit. 
When the equipment is in the failure state, a corrective maintenance action is performed, which can be minimal repair or replacement of the machine, and the process is brought back to the in-control state. A semi-Markov decision process (SMDP) framework is used to formulate and solve the joint control problem. A numerical example is developed to demonstrate the effectiveness of the control policy. Keywords: maintenance, semi-Markov decision process, statistical process control, Xbar control chart
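The Xbar-chart monitoring step described above can be sketched in a few lines: sample means taken at equidistant epochs are compared against control limits mu ± 3·sigma/sqrt(n), and a point outside the limits signals a possible out-of-control condition (which the paper's policy then investigates as a true or false alarm). The process parameters and sample means below are illustrative.

```python
import math

def xbar_limits(mu, sigma, n, k=3.0):
    """Lower and upper control limits for sample means of size n."""
    half = k * sigma / math.sqrt(n)
    return mu - half, mu + half

def signals(sample_means, lcl, ucl):
    """Indices of sampling epochs where the chart signals out-of-control."""
    return [i for i, m in enumerate(sample_means) if m < lcl or m > ucl]

# Illustrative in-control process: mean 10.0, sigma 0.6, samples of size 4.
lcl, ucl = xbar_limits(mu=10.0, sigma=0.6, n=4, k=3.0)  # limits 10 +/- 0.9
means = [10.1, 9.8, 10.3, 9.4, 11.2, 10.0]  # epoch 4 drifts out of control
print(signals(means, lcl, ucl))  # [4]
```

In the paper's policy such a signal stops production and triggers the investigation and maintenance decision; the chart itself is only the detection layer.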
Procedia PDF Downloads 90
4549 Role of Grey Scale Ultrasound Including Elastography in Grading the Severity of Carpal Tunnel Syndrome - A Comparative Cross-sectional Study
Authors: Arjun Prakash, Vinutha H., Karthik N.
Abstract:
BACKGROUND: Carpal tunnel syndrome (CTS) is a common entrapment neuropathy with an estimated prevalence of 0.6-5.8% in the general adult population. It is caused by compression of the median nerve (MN) at the wrist as it passes through a narrow osteofibrous canal. Presently, the diagnosis is established from clinical symptoms and physical examination, and nerve conduction study (NCS) is used to assess severity. However, NCS is considered painful, time-consuming, and expensive, with a false-negative rate between 16-34%. Ultrasonography (USG) is now increasingly used as a diagnostic tool in CTS due to its non-invasive nature, increased accessibility, and relatively low cost. Elastography is a newer modality in USG that helps to assess the stiffness of tissues; however, there is limited available literature on its applications in peripheral nerves. OBJECTIVES: Our objectives were to measure the cross-sectional area (CSA) and elasticity of the MN at the carpal tunnel using grey scale ultrasonography (USG), strain elastography (SE), and shear wave elastography (SWE). We also attempted to independently evaluate the role of grey scale USG, SE, and SWE in grading the severity of CTS, keeping NCS as the gold standard. MATERIALS AND METHODS: After approval from the Institutional Ethics Review Board, we conducted a comparative cross-sectional study over a period of 18 months. The participants were divided into two groups: Group A consisted of 54 patients with clinically diagnosed CTS who underwent NCS, and Group B consisted of 50 controls without any clinical symptoms of CTS. All ultrasound examinations were performed on a SAMSUNG RS 80 EVO ultrasound machine with a 2-9 MHz linear probe. In both groups, the CSA of the MN was measured on grey scale USG, and its elasticity was measured at the carpal tunnel (in terms of strain ratio and shear modulus).
The variables were compared between the two groups using the independent t-test, and subgroup analyses were performed using one-way analysis of variance. Receiver operating characteristic curves were used to evaluate the diagnostic performance of each variable. RESULTS: The mean CSA of the MN was 13.60 ± 3.201 mm² in Group A and 9.17 ± 1.665 mm² in Group B (p < 0.001). The mean SWE was 30.65 ± 12.996 kPa in Group A and 17.33 ± 2.919 kPa in Group B (p < 0.001), and the mean strain ratio was 7.545 ± 2.017 in Group A and 5.802 ± 1.153 in Group B (p < 0.001). CONCLUSION: The combined use of grey scale USG, SE, and SWE is extremely useful in grading the severity of CTS and can be used as a painless and cost-effective alternative to NCS. Early diagnosis and grading of CTS and effective treatment are essential to avoid permanent nerve damage and functional disability. Keywords: carpal tunnel, ultrasound, elastography, nerve conduction study
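The ROC-based evaluation mentioned above can be sketched with the rank (Mann-Whitney) formulation of the area under the curve: the probability that a randomly chosen patient value exceeds a randomly chosen control value. The CSA values below are synthetic, loosely centred on the reported group means, not the study's data.

```python
def roc_auc(positives, negatives):
    """AUC via the Mann-Whitney statistic: P(random positive > random negative),
    counting ties as half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in positives for n in negatives
    )
    return wins / (len(positives) * len(negatives))

# Hypothetical median-nerve CSA measurements (mm^2).
csa_patients = [13.6, 15.2, 11.8, 12.9, 16.4, 10.5, 14.1]  # CTS group
csa_controls = [9.2, 8.7, 10.1, 9.8, 7.9, 11.0, 9.5]       # control group

print(round(roc_auc(csa_patients, csa_controls), 3))
```

An AUC near 1.0 indicates that CSA alone separates the groups well, which is consistent with the large mean difference reported in the abstract.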
Procedia PDF Downloads 100
4548 Servitization in Machine and Plant Engineering: Leveraging Generative AI for Effective Product Portfolio Management Amidst Disruptive Innovations
Authors: Till Gramberg
Abstract:
In the dynamic world of machine and plant engineering, stagnation in the growth of new product sales compels companies to reconsider their business models. The increasing shift toward service orientation, known as "servitization," along with challenges posed by digitalization and sustainability, necessitates an adaptation of product portfolio management (PPM). Against this backdrop, this study investigates the current challenges and requirements of PPM in this industrial context and develops a framework for the application of generative artificial intelligence (AI) to enhance agility and efficiency in PPM processes. The research approach of this study is based on a mixed-method design. Initially, qualitative interviews with industry experts were conducted to gain a deep understanding of the specific challenges and requirements in PPM. These interviews were analyzed using the Gioia method, painting a detailed picture of the existing issues and needs within the sector. This was complemented by a quantitative online survey. The combination of qualitative and quantitative research enabled a comprehensive understanding of the current challenges in the practical application of machine and plant engineering PPM. Based on these insights, a specific framework for the application of generative AI in PPM was developed. This framework aims to assist companies in implementing faster and more agile processes, systematically integrating dynamic requirements from trends such as digitalization and sustainability into their PPM process. Utilizing generative AI technologies, companies can more quickly identify and respond to trends and market changes, allowing for a more efficient and targeted adaptation of the product portfolio. The study emphasizes the importance of an agile and reactive approach to PPM in a rapidly changing environment. 
It demonstrates how generative AI can serve as a powerful tool to manage the complexity of a diversified and continually evolving product portfolio. The developed framework offers practical guidelines and strategies for companies to improve their PPM processes by leveraging the latest technological advancements while maintaining ecological and social responsibility. This paper significantly contributes to deepening the understanding of the application of generative AI in PPM and provides a framework for companies to manage their product portfolios more effectively and adapt to changing market conditions. The findings underscore the relevance of continuous adaptation and innovation in PPM strategies and demonstrate the potential of generative AI for proactive and future-oriented business management. Keywords: servitization, product portfolio management, generative AI, disruptive innovation, machine and plant engineering
Procedia PDF Downloads 81
4547 Autism Spectrum Disorder Classification Algorithm Using Multimodal Data Based on Graph Convolutional Network
Authors: Yuntao Liu, Lei Wang, Haoran Xia
Abstract:
Machine learning has shown extensive application in the development of classification models for autism spectrum disorder (ASD) using neuroimaging data. This paper proposes a fused multi-modal classification network based on a graph neural network. First, the brain is segmented into 116 regions of interest (ROIs) using a medical segmentation template (AAL, Anatomical Automatic Labeling). The image features of sMRI and the signal features of fMRI are extracted, which build the node and edge embedding representations of the brain graph. Then, we construct a dynamically updated brain graph neural network and propose a method based on a dynamic brain graph adjacency matrix update mechanism and a learnable graph to further improve the accuracy of autism diagnosis and recognition. Based on the Autism Brain Imaging Data Exchange I dataset (ABIDE I), we reached a prediction accuracy of 74% between ASD and TD subjects. In addition, to study biomarkers that can help doctors analyze the disease and to improve interpretability, we extracted the ROIs with the top five maximum and minimum weights. This work provides a meaningful way for brain disorder identification. Keywords: autism spectrum disorder, brain map, supervised machine learning, graph network, multimodal data, model interpretability
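The graph-convolution step underlying such a model can be sketched with the standard GCN propagation rule H' = ReLU(Â · H · W), where Â is the degree-normalized adjacency matrix with self-loops. A 4-node toy graph stands in here for the 116-ROI brain graph; the features and weights are illustrative, and this is the generic GCN layer, not the paper's dynamically updated variant.

```python
import math

def matmul(A, B):
    """Plain list-of-lists matrix multiplication."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gcn_layer(adj, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 · H · W)."""
    n = len(adj)
    A_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]                      # add self-loops
    deg = [sum(row) for row in A_hat]
    A_norm = [[A_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
              for i in range(n)]                     # symmetric normalization
    Z = matmul(matmul(A_norm, H), W)
    return [[max(0.0, z) for z in row] for row in Z]  # ReLU

adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]  # path graph
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]            # node features
W = [[0.5, -0.5], [0.5, 0.5]]                                    # layer weights

print([[round(v, 2) for v in row] for row in gcn_layer(adj, H, W)])
```

Stacking such layers mixes each ROI's features with those of its neighbours, which is what lets edge weights act as interpretable connectivity biomarkers.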
Procedia PDF Downloads 64
4546 A Method for False Alarm Recognition Based on Multi-Classification Support Vector Machine
Authors: Weiwei Cui, Dejian Lin, Leigang Zhang, Yao Wang, Zheng Sun, Lianfeng Li
Abstract:
Built-in test (BIT) is an important technology in the testability field, widely used in state monitoring and fault diagnosis. As the performance and complexity of modern equipment improve, the scope of BIT becomes larger, which leads to the emergence of the false alarm problem. False alarms make the health assessment unstable and reduce the effectiveness of BIT. Conventional false alarm suppression methods such as repeated testing and majority voting cannot meet the requirements of a complicated system, so intelligent algorithms such as artificial neural networks (ANN) are widely studied and used. However, false alarms occur at a very low frequency and yield only small samples, while ANN-based methods require large training sets. To recognize false alarms, we propose a method based on a multi-classification support vector machine (SVM) in this paper. Firstly, we divide the state of a system into three states: healthy, false-alarm, and faulty. Then we use multi-classification with a '1 vs 1' policy to train and recognize the state of the system. Finally, an example of a fault injection system is taken to verify the effectiveness of the proposed method by comparison with an ANN. The results show that the method is reasonable and effective.Keywords: false alarm, fault diagnosis, SVM, k-means, BIT
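The '1 vs 1' policy trains one binary classifier per pair of states and lets them vote. A minimal sketch of that voting scheme over the three states named above; simple nearest-centroid classifiers stand in for the binary SVMs, and the two-dimensional feature data is synthetic:

```python
from itertools import combinations

# Hedged sketch of the '1 vs 1' multi-class policy over the three system
# states (healthy, false-alarm, faulty). Nearest-centroid classifiers
# stand in for the binary SVMs; the feature vectors are synthetic.

def centroid(samples):
    dims = len(samples[0])
    return [sum(s[d] for s in samples) / len(samples) for d in range(dims)]

def train_ovo(data):
    """data: {label: [feature vectors]} -> list of pairwise models."""
    models = []
    for a, b in combinations(sorted(data), 2):
        models.append((a, b, centroid(data[a]), centroid(data[b])))
    return models

def predict_ovo(models, x):
    """Each pairwise model casts one vote; the majority label wins."""
    votes = {}
    for a, b, ca, cb in models:
        da = sum((xi - ci) ** 2 for xi, ci in zip(x, ca))
        db = sum((xi - ci) ** 2 for xi, ci in zip(x, cb))
        winner = a if da <= db else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

data = {
    "healthy":     [[0.1, 0.2], [0.0, 0.1]],
    "false-alarm": [[1.0, 1.1], [0.9, 1.0]],
    "faulty":      [[2.0, 2.1], [2.1, 1.9]],
}
models = train_ovo(data)
```

With k = 3 classes, only k(k-1)/2 = 3 pairwise classifiers are needed, which is why the '1 vs 1' policy stays tractable for small-sample problems like false-alarm recognition.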
Procedia PDF Downloads 155
4545 Community Health Commodities Distribution of integrated HIV and Non-Communicable Disease Services during COVID-19 Pandemic – Eswatini Case Study
Authors: N. Dlamini, Mpumelelo G. Ndlela, Philisiwe Dlamini, Nicholus Kisyeri, Bhekizitha Sithole
Abstract:
Accessing health services during the COVID-19 pandemic exacerbated the scarcity of routine medication. To ensure continuous accessibility to services, Eswatini launched Community Health Commodities Distribution (CHCD). Eligible stable antiretroviral therapy (ART) clients (VL<1,000) and patients on non-communicable disease (NCD) medications were attended to at community pick-up points (PUP), chosen based on the distance between clients' residences and the public health facility. Services provided included ART, pre-exposure prophylaxis (PrEP), and NCD drug refills. The number of community PUP visits was 14% higher than health facility visits. Among all medications and commodities distributed between April and October 2020 at the PUPs, 64% were HIV-related (HIV rapid test, HIVST, VL test, PrEP medications) and 36% were NCD-related. The rapid roll-out of CHCD during the COVID-19 pandemic reduced the risk of COVID-19 transmission to clients, as travel to health facilities was eliminated. It additionally increased access to commodities during the COVID-19-driven lockdown, decongested health facilities, provided an integrated model of care, and increased service coverage. It was also noted that CHCD added different curative and HIV-related services based on client-specific needs and the availability of commodities.Keywords: community health commodities distribution, pick up points, antiretroviral therapy, pre-exposure prophylaxis
Procedia PDF Downloads 134
4544 Design and Implementation of Machine Learning Model for Short-Term Energy Forecasting in Smart Home Management System
Authors: R. Ramesh, K. K. Shivaraman
Abstract:
The main aim of this paper is to handle the energy requirement in an efficient manner by merging advanced digital communication and control technologies for smart grid applications. In order to reduce user home load during peak hours, the utility applies several incentives such as real-time pricing, time of use, and demand response for residential customers through smart meters. However, this method is inconvenient in the sense that users need to respond manually to prices that vary in real time. To overcome this inconvenience, this paper proposes a convolutional neural network (CNN) with a k-means clustering machine learning model that can forecast the energy requirement in the short term, i.e., the hour of the day or day of the week. Integrating the proposed technique with home energy management based on Bluetooth Low Energy provides predicted values to the user for scheduling appliances in advance. This paper describes in detail the CNN configuration and the k-means clustering algorithm for short-term energy forecasting.Keywords: convolutional neural network, fuzzy logic, k-means clustering approach, smart home energy management
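The k-means step groups load readings into usage regimes (e.g., overnight base load vs. evening peak) before the CNN forecast. A pure-Python sketch of that clustering on scalar hourly loads; the data and the choice of k = 2 are illustrative, not the paper's actual pipeline:

```python
# Hedged sketch of the k-means clustering step described above: grouping
# hourly household load readings (kW) into k usage regimes. Pure Python,
# synthetic data; the paper's real feature pipeline is not public.

def kmeans_1d(values, k, iters=20):
    """Cluster scalar load readings into k groups; return sorted centroids."""
    # seed centroids by taking evenly spaced sorted values
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[idx].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return sorted(centroids)

# a low overnight regime and a high evening-peak regime
loads = [0.3, 0.4, 0.35, 0.3, 2.8, 3.0, 2.9, 3.1]
centroids = kmeans_1d(loads, k=2)
```

The resulting centroids give the CNN a compact label (which regime the household is in) instead of the raw load value, which is one common way k-means is paired with a forecasting network.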
Procedia PDF Downloads 303
4543 The Effects of Topically-Applied Skin Moisturizer on Striae Gravidarum in East Indian Women
Authors: Dipanshu Sur, Ratnabali Chakravorty
Abstract:
Background: Striae result from rapid expansion of the underlying tissue, e.g. during puberty, pregnancy, or rapid weight gain. Prior data indicate that the incidence of stretch marks in Indian women is 77%. Hormonal and genetic factors are associated with their appearance. Recently, it has been found that skin extensibility, elasticity, and rupture are strongly influenced by the water content of dermal and epidermal cells. Objective: The objective was to assess the effects of topical treatments applied during pregnancy on the later development of stretch marks. Materials and methods: An open, prospective, randomized study was conducted on 120 pregnant women in whom skin elasticity and hydration, as well as striae presence or appearance, were measured at baseline and periodically until delivery. Patients were randomly assigned to application of the cream on wet skin or on dry skin. Results: The average basal hydration was 42 ± 13 IU and the final was 46 ± 6 IU (P = 0.0325; 95% CI: -7.66 to -0.34), a statistically significant difference. Measuring moisture in the control region (forearm) gave a basal reading of 40 ± 9 IU and an end-of-study reading of 38 ± 6 IU (p = 0.1547; 95% CI: -0.77 to 4.77), a difference that was not statistically significant. At the end of the study, 55% of women had no striae, 5% had mild striae, 36% moderate, and 4% severe. The proportion of women without striae was 54% when the studied cream was applied to wet skin and 45% when it was applied to dry skin. Conclusion: The cream under study increased the hydration and elasticity of abdominal skin in all subjects. This effect was more significant (54%) when the cream was applied to damp skin.Keywords: striae gravidarum, skin moisturizer, skin hydration, skin elasticity
Procedia PDF Downloads 217
4542 Modelling the Behavior of Commercial and Test Textiles against Laundering Process by Statistical Assessment of Their Performance
Authors: M. H. Arslan, U. K. Sahin, H. Acikgoz-Tufan, I. Gocek, I. Erdem
Abstract:
Various exterior factors have perpetual effects on textile materials during wear, use, and laundering in everyday life. In accordance with their frequency of use, textile materials are required to be laundered at certain intervals. The medium in which the laundering process takes place has inevitable detrimental physical and chemical effects on textile materials, caused by the parameters inherent to the process. The inherent structures of various textile materials result in many different physical, chemical, and mechanical characteristics; because of these specific structures, the materials behave differently against several exterior factors. By modeling the behavior of commercial and test textiles group-wise against the laundering process, it is possible to disclose the relation between these two groups of materials, leading to a better understanding of the similarities and differences in their behavior against the washing parameters of laundering. Thus, the goal of the current research is to examine the behavior of two groups of textile materials, commercial textiles and test textiles, towards the main washing machine parameters during the laundering process, such as temperature, load quantity, mechanical action, and water level, by concentrating on shrinkage, pilling, sewing defects, collar abrasion, defects other than sewing, whitening, and overall properties of the textiles. In this study, cotton fabrics were preferred as commercial textiles because garments made of cotton are the products most demanded by textile consumers in daily life. A full factorial experimental set-up was used to design the experimental procedure. All profiles, always including all of the commercial and test textiles, were laundered for 20 cycles in a commercial home laundering machine to investigate the effects of the chosen parameters.
For the laundering process, a modified version of the ‘‘IEC 60456 Test Method’’ was utilized. The amount of detergent was adjusted at 0.5 g per liter depending on the load quantity level. Datacolor 650®, EMPA Photographic Standards for the Pilling Test, and visual examination were utilized to test and characterize the textiles. Furthermore, the relation between commercial and test textiles in terms of their performance was investigated in depth with the help of statistical analysis performed with the MINITAB® package program, modeling their behavior against the parameters of the laundering process. In the experimental work, the behaviors of both groups of textiles towards the washing machine parameters were assessed visually and quantitatively in the dry state.Keywords: behavior against washing machine parameters, performance evaluation of textiles, statistical analysis, commercial and test textiles
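A full factorial set-up, as used above, runs every combination of the chosen parameter levels. A minimal sketch of enumerating such a design over the four washing-machine parameters; the level values are illustrative placeholders, not the study's actual settings:

```python
from itertools import product

# Hedged sketch: enumerating a full factorial experimental set-up over
# the four washing-machine parameters named in the abstract. The level
# values below are illustrative placeholders, not the study's settings.

factors = {
    "temperature_C": [30, 40, 60],
    "load_quantity_kg": [2, 4],
    "mechanical_action": ["gentle", "normal"],
    "water_level": ["low", "high"],
}

# one run per combination of levels: 3 * 2 * 2 * 2 = 24 runs
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

The advantage of the full factorial design is that main effects and all interactions between the washing parameters can be estimated, at the cost of a run count that multiplies across factors.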
Procedia PDF Downloads 358
4541 Service Information Integration Platform as Decision Making Tools for the Service Industry Supply Chain-Indonesia Service Integration Project
Authors: Haikal Achmad Thaha, Pujo Laksono, Dhamma Nibbana Putra
Abstract:
Customer service is one of the core interests in the service sector of a company, whether as the core business or as the service part of the operation. Most of the time, practitioners and previous research in the service industry focus on finding the best business model for the service sector, usually deciding between fully in-house customer service, outsourcing, or something in between. Conventionally, taking this decision is an important part of the management job, and it is a process that usually takes considerable time and staff effort; meanwhile, market conditions and overall company needs may change, causing loss of income and temporary disturbance in the company's operation. In this paper, however, we offer a new concept model to assist the decision-making process in the service industry. This model features an information platform as the central tool to integrate service industry operations. The result is a service information model that would ideally increase the response time and effectiveness of decision-making. It would also help the service industry switch service solution systems quickly through machine learning as the company grows and the required service solutions change.Keywords: service industry, customer service, machine learning, decision making, information platform
Procedia PDF Downloads 620
4540 Prediction of Survival Rate after Gastrointestinal Surgery Based on The New Japanese Association for Acute Medicine (JAAM Score) With Neural Network Classification Method
Authors: Ayu Nabila Kusuma Pradana, Aprinaldi Jasa Mantau, Tomohiko Akahoshi
Abstract:
Disseminated intravascular coagulation (DIC) following gastrointestinal surgery has a poor prognosis. Therefore, it is important to determine the factors that can predict the prognosis of DIC. This study investigates the factors that may influence the outcome of DIC in patients after gastrointestinal surgery. Eighty-one patients were admitted to the intensive care unit after gastrointestinal surgery at Kyushu University Hospital from 2003 to 2021. Acute DIC scores were estimated using the new Japanese Association for Acute Medicine (JAAM) score before surgery and on postoperative days 1, 3, and 7. Acute DIC scores were compared with the Sequential Organ Failure Assessment (SOFA) score, platelet count, lactate level, and a variety of biochemical parameters. This study applied machine learning algorithms to predict the prognosis of DIC after gastrointestinal surgery. The results are expected to be used as an indicator for evaluating patient prognosis, so that life expectancy can be increased and mortality reduced in cases of DIC after gastrointestinal surgery.Keywords: the survival rate, gastrointestinal surgery, JAAM score, neural network, machine learning, disseminated intravascular coagulation (DIC)
Procedia PDF Downloads 255
4539 Multi-Analyte Indium Gallium Zinc Oxide-Based Dielectric Electrolyte-Insulator-Semiconductor Sensing Membranes
Authors: Chyuan Haur Kao, Hsiang Chen, Yu Sheng Tsai, Chen Hao Hung, Yu Shan Lee
Abstract:
Dielectric electrolyte-insulator-semiconductor sensing-membrane-based biosensors have been intensively investigated because of their simple fabrication, low cost, and fast response. However, to enhance their sensing performance, it is worthwhile to explore alternative materials, distinct processes, and novel treatments. An ISFET can be viewed as a variation of the MOSFET with the dielectric oxide layer as the sensing membrane; modulation of the gate work function caused by electrolytes at various ion concentrations can then be used to calculate those concentrations. Recently, owing to the advancement of CMOS technology, several high-dielectric materials have been used as the sensing membranes of electrolyte-insulator-semiconductor (EIS) structures. An EIS with a stacked SiO₂ layer between the sensing membrane and the silicon substrate exhibited high pH sensitivity and good long-term stability. IGZO is a wide-bandgap (~3.15 eV) oxide semiconductor with several preferable properties, including good transparency, high electron mobility, a wide band gap, and compatibility with CMOS technology. IGZO was deposited by reactive radio frequency (RF) sputtering on a p-type silicon wafer with various Ar:O₂ gas ratios and was treated with rapid thermal annealing in O₂ ambient. The sensing performance, including sensitivity, hysteresis, and drift rate, was measured, and XRD, XPS, and AFM analyses were used to study the material properties of the IGZO membrane. Moreover, IGZO was used as a sensing membrane in dielectric EIS biosensor structures. In addition to traditional pH sensing capability, detection of Na+, K+, urea, glucose, and creatinine concentrations was performed. Post rapid thermal annealing (RTA) treatment was confirmed to improve the material properties and enhance the multi-analyte sensing capability for various ions and chemicals in solution.
In this study, the IGZO sensing membrane annealed in O₂ ambient exhibited higher sensitivity, higher linearity, higher H+ selectivity, lower hysteresis voltage, and a lower drift rate. The results indicate that the IGZO dielectric sensing membrane on the EIS structure is promising for future biomedical device applications.Keywords: dielectric sensing membrane, IGZO, hydrogen ion, plasma, rapid thermal annealing
Procedia PDF Downloads 250
4538 Survival Strategies of Street Children Using the Urban Space: A Case Study at Sealdah Railway Station Area, Kolkata, West Bengal, India
Authors: Sibnath Sarkar
Abstract:
Developing countries face many social problems, and India is no exception. The problem of street children is one of them. No country or city anywhere in the world today is without street children, but the problem is most acute in developing countries. Thousands of street children can be seen in populous Indian cities like Mumbai, Kolkata, Delhi, and Chennai, most of them in the age group of 5-15 years, and their number is increasing gradually. Poverty, unemployment, rapid urbanization, and rural-urban migration are the root causes. Deprived of many of their rights, these children have escaped to the street as a seemingly safe place to live. Street children are closely tied to urban spaces in the developing world, and their presence represents a sad outcome of the rapid urbanization process. After coming to the streets, these children have to cope with a new situation every day. They adopt or develop many complex survival strategies and a variety of informal or even illegal activities in public space, and form supportive social networks in order to survive street life. Street children use whichever urban spaces are suitable as their earning, living, and entertainment spots. Therefore, the livelihoods of young people on the street should be analyzed in relation to the spaces they use, as well as their age and length of stay on the streets. This paper tries to explore the livelihood strategies and coping situation of street children in the Sealdah station area. One hundred seventy-five street-living children living in and around the railway station are included in the study.Keywords: strategies, street children, survive, urban-space
Procedia PDF Downloads 360
4537 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. 
It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than a one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.Keywords: data refinement, machine learning, mutual information, short-term latency prediction
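Finding (1) above, that median latency is more meaningful than average latency because the latter is plagued by outliers, can be illustrated in a few lines. The latency values are synthetic; the point is only the relative sensitivity of the two statistics:

```python
import statistics

# Hedged illustration of finding (1): the median latency of a segment is
# far less sensitive to outliers (e.g., one incident-delayed vehicle)
# than the average. Latencies below are synthetic, in seconds.

normal_latencies = [610, 620, 615, 605, 625]
with_outlier = normal_latencies + [3600]  # one vehicle stuck for an hour

mean_shift = statistics.mean(with_outlier) - statistics.mean(normal_latencies)
median_shift = statistics.median(with_outlier) - statistics.median(normal_latencies)
# the single outlier drags the mean by hundreds of seconds but barely
# moves the median
```

This is exactly why a model trained to predict the 5-minute median can score much better on mean-square error than one trained on the 5-minute average.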
Procedia PDF Downloads 167
4536 Single Imputation for Audiograms
Authors: Sarah Beaver, Renee Bryce
Abstract:
Audiograms detect hearing impairment, but missing values pose problems. This work explores imputation in an attempt to improve accuracy. It implements Linear Regression, Lasso, Linear Support Vector Regression, Bayesian Ridge, K Nearest Neighbors (KNN), and Random Forest machine learning techniques to impute audiogram frequencies ranging from 125 Hz to 8000 Hz. The data contain patients who had or were candidates for cochlear implants. Accuracy is compared across two different nested cross-validation k values. Over 4000 audiograms from 800 unique patients were used. Additionally, training on combined left- and right-ear audiograms is compared with training on single-ear audiograms. The Root Mean Square Error (RMSE) values for the best Random Forest models range from 4.74 to 6.37, and the R² values range from 0.91 to 0.96. The RMSE values for the best KNN models range from 5.00 to 7.72, and the R² values range from 0.89 to 0.95. Overall, the best imputation models achieved R² between 0.89 and 0.96 and RMSE values of less than 8 dB. We also show that classification predictive models performed about two percent better with our best imputation models than with constant imputation.Keywords: machine learning, audiograms, data imputations, single imputations
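The two metrics reported above, RMSE (in dB) and R², are straightforward to compute. A minimal sketch applied to a toy imputation of one masked audiogram frequency; the threshold values are synthetic:

```python
import math

# Hedged sketch of the two evaluation metrics reported above (RMSE in dB
# and R^2), applied to a toy imputation of a masked audiogram frequency.
# The hearing thresholds (dB HL) are synthetic.

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def r_squared(actual, predicted):
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# true vs imputed thresholds for five patients at one masked frequency
actual = [20, 35, 50, 65, 80]
predicted = [22, 33, 52, 63, 78]
```

An RMSE under 8 dB, as the paper reports, means the imputed threshold is typically within about one audiogram step of the true value.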
Procedia PDF Downloads 80
4535 Rapid and Efficient Removal of Lead from Water Using Chitosan/Magnetite Nanoparticles
Authors: Othman M. Hakami, Abdul Jabbar Al-Rajab
Abstract:
The occurrence of heavy metals in water resources has increased in recent years, albeit at low concentrations. Lead (Pb(II)) is among the most important inorganic pollutants in ground and surface water, and its efficient removal from water is of public and scientific concern. In this study, we developed a rapid and efficient method for removing lead from water using chitosan/magnetite nanoparticles. A simple and effective process was used to prepare chitosan/magnetite nanoparticles (CS/Mag NPs) with little effect on the saturation magnetization value; the particles were strongly responsive to an external magnetic field, making separation from solution possible in less than 2 minutes using a permanent magnet, while the total Fe in solution remained below the detection limit of ICP-OES (<0.19 mg L-1). The hydrodynamic particle size distribution increased from an average diameter of ~60 nm for Fe3O4 NPs to ~75 nm after chitosan coating. The feasibility of the prepared NPs for the adsorption and desorption of Pb(II) from water was evaluated; the CS/Mag NPs showed high removal efficiency, with 90% of Pb(II) removed during the first 5 minutes and equilibrium reached in less than 10 minutes. The maximum adsorption capacity for Pb(II), which occurred at pH 6.0 and room temperature, was as high as 85.5 mg g-1 according to the Langmuir isotherm model. Desorption of adsorbed Pb(II) from the CS/Mag NPs was evaluated using deionized water at pH values ranging from 1 to 7; this was an effective eluent and did not destroy the NPs, which could subsequently be reused without any loss of activity in further adsorption tests. Overall, our results show the high efficiency of chitosan/magnetite NPs for lead removal from water under controlled conditions; further studies should be carried out under real field conditions.Keywords: chitosan, magnetite, water, treatment
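The maximum capacity quoted above comes from fitting the Langmuir isotherm, q_e = q_max · K_L · C_e / (1 + K_L · C_e). A minimal sketch using the reported q_max = 85.5 mg/g; the affinity constant K_L is an assumed illustrative value, not one reported in the study:

```python
# Hedged sketch of the Langmuir isotherm used to obtain the reported
# maximum capacity. q_max = 85.5 mg/g is from the abstract; the affinity
# constant K_L = 0.2 L/mg is an ASSUMED illustrative value.

def langmuir(c_e, q_max=85.5, k_l=0.2):
    """Equilibrium adsorption q_e (mg/g) at equilibrium concentration c_e (mg/L)."""
    return q_max * k_l * c_e / (1 + k_l * c_e)

# at low concentration uptake is nearly linear in c_e; at high
# concentration it saturates toward q_max as the monolayer fills
low = langmuir(1.0)
high = langmuir(500.0)
```

The saturation behavior is the defining feature of the Langmuir model: however concentrated the lead solution, q_e never exceeds q_max, which is why q_max is quoted as the capacity of the sorbent.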
Procedia PDF Downloads 401
4534 Transforming Data Science Curriculum Through Design Thinking
Authors: Samar Swaid
Abstract:
Today, corporations are moving toward the adoption of design-thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in design thinking, IDEO (Innovation, Design, Engineering Organization), defines design thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects, or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has proven to be the road to developing innovative applications, interactive systems, scientific software, and healthcare applications, and even to rethinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure creating the "wow" effect on consumers. The Association for Computing Machinery task force on data science programs states that "data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability." However, this definition hides the user behind the machine, who works on data preparation, algorithm selection, and model interpretation.
Thus, the data science program includes design thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of design thinking and teaching modules for data science programs.Keywords: data science, design thinking, AI, curriculum, transformation
Procedia PDF Downloads 79
4533 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals
Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor
Abstract:
This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. 
This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification.Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers
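The simplest combination rule in the ensemble approaches discussed above is majority voting over the base classifiers' predictions. A minimal, dependency-free sketch on synthetic brain-signal trial labels; the class names and predictions are illustrative only:

```python
from collections import Counter

# Hedged sketch of the simplest ensemble combination rule discussed
# above: majority voting over several base classifiers' predictions
# for a set of brain-signal trials. Labels and predictions are synthetic.

def majority_vote(predictions_per_classifier):
    """predictions_per_classifier: list of equal-length label lists,
    one list per base classifier; returns the fused label per trial."""
    n_trials = len(predictions_per_classifier[0])
    fused = []
    for t in range(n_trials):
        votes = Counter(clf[t] for clf in predictions_per_classifier)
        fused.append(votes.most_common(1)[0][0])
    return fused

# three base classifiers, four trials (e.g., motor-imagery classes)
preds = [
    ["left", "right", "left", "rest"],
    ["left", "left",  "left", "rest"],
    ["right", "right", "left", "left"],
]
fused = majority_vote(preds)
```

A meta-classifier, as mentioned in the abstract, generalizes this rule: instead of a fixed vote, a second-level model is trained on the base classifiers' outputs to learn which of them to trust for which kinds of trials.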
Procedia PDF Downloads 74
4532 Impact of Rapid Urbanization on Health Sector in India
Authors: Madhvi Bhayani
Abstract:
Introduction: Due to the rapid pace of urbanization, urban health issues have become one of the significant threats to future development in India, with serious repercussions for citizens' health. Urbanization in India is increasing at an unprecedented rate and has generated an urban health crisis among city dwellers, especially the urban poor. The urban poor, an increasing proportion of the population whose health indicators are often worse than those of their rural counterparts, face social and financial barriers in accessing healthcare services, and these conditions put human health at risk. Local, state, and national governments alike are grappling with the challenges of urbanization, as it has become essential for the government to provide the basic necessities and better infrastructure that make life in cities safe and healthy. Thus, the paper argues that if no major realistic steps are taken with immediate effect, citizens will face a huge burden of health hazards. Aim: This paper attempts to analyze the current infrastructure, government planning, and future policy; it also discusses the challenges and outcomes of urbanization and their impact on health, and predicts the future trend with regard to disease burden in urban areas. Methods: The paper's analysis is based on secondary data, taking into consideration the connection between rapid urbanization and public health challenges, the health and healthcare system, and the delivery of its services to citizens, especially the urban poor. Based on extensive analyses of government census reports, health information and policy, government health-related schemes, urban development, and past trends, the future status of urban infrastructure and health outcomes is predicted.
Socio-economic and political dimensions are also taken into consideration from regional, national, and global perspectives and incorporated in the paper to make realistic predictions for the future. Findings and Conclusion: The findings of the paper show that India suffers from a double burden: a rapidly increasing disease burden alongside growing health inequalities and disparities in health outcomes. Existing tools of urban health governance fall short of providing better healthcare services; collaboration and communication need to be strengthened among state, national, and local governments and with non-governmental partners. Based on the findings, policy implications are described and areas for future research are defined.Keywords: health care, urbanization, urban health, service delivery
Procedia PDF Downloads 209
4531 A Predictive Model for Turbulence Evolution and Mixing Using Machine Learning
Authors: Yuhang Wang, Jorg Schluter, Sergiy Shelyag
Abstract:
The high cost associated with high-resolution computational fluid dynamics (CFD) is one of the main challenges that inhibit the design, development, and optimisation of new combustion systems adapted for renewable fuels. In this study, we propose a physics-guided CNN-based model to predict turbulence evolution and mixing without requiring a traditional CFD solver. The model architecture is built upon U-Net and the inception module, while a physics-guided loss function is designed by introducing two additional physical constraints to enforce the conservation of both mass and pressure over the entire predicted flow fields. The model is trained on Large Eddy Simulation (LES) results of a natural turbulent mixing layer at two different Reynolds numbers (Re = 3000 and 30000). The model predictions show excellent agreement with the corresponding CFD solutions in terms of both the spatial distribution and temporal evolution of turbulent mixing. Such promising predictive performance opens up the possibility of running accurate high-resolution manifold-based combustion simulations at low computational cost, accelerating the iterative design process of new combustion systems.Keywords: computational fluid dynamics, turbulence, machine learning, combustion modelling
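A physics-guided loss of the kind described above combines a standard data-fit term with penalties on violations of the conservation constraints. A minimal, dependency-free sketch in which mass conservation for an incompressible field is approximated by a finite-difference divergence penalty; the grids, weights, and penalty form are illustrative, not the paper's actual loss:

```python
# Hedged sketch of a physics-guided loss: a data term plus a penalty on
# violations of mass conservation, here the squared finite-difference
# divergence of a 2-D velocity field (u, v). Grid sizes, the weight,
# and the penalty form are illustrative, not the paper's actual loss.

def divergence(u, v, dx=1.0):
    """Sum of squared central-difference div(u, v) over interior points."""
    n = len(u)
    div = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            du_dx = (u[i][j + 1] - u[i][j - 1]) / (2 * dx)
            dv_dy = (v[i + 1][j] - v[i - 1][j]) / (2 * dx)
            div += (du_dx + dv_dy) ** 2
    return div

def physics_guided_loss(pred, target, u, v, weight=0.1):
    data_term = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return data_term + weight * divergence(u, v)

# a uniform flow (u = const, v = 0) is divergence-free, so only the
# data term contributes to the loss
u = [[1.0] * 4 for _ in range(4)]
v = [[0.0] * 4 for _ in range(4)]
loss = physics_guided_loss([0.9, 1.1], [1.0, 1.0], u, v)
```

During training, the penalty term steers the network toward predictions that satisfy the conservation law even where the training data is sparse, which is the central idea behind physics-guided losses.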
Procedia PDF Downloads 89