Search results for: adaptive thermal comfort model
16603 Bankruptcy Prediction Analysis on Mining Sector Companies in Indonesia
Authors: Devina Aprilia Gunawan, Tasya Aspiranti, Inugrah Ratia Pratiwi
Abstract:
This research aims to classify mining sector companies based on Altman's Z-score model, to provide an analysis of the model's financial ratios that gives a picture of the financial condition of mining sector companies in Indonesia and their future viability, and to determine the partial and simultaneous impact of each financial ratio variable in the model, namely (WC/TA), (RE/TA), (EBIT/TA), (MVE/TL), and (S/TA), on the financial condition represented by the Z-score itself. Among the 38 mining sector companies listed on the Indonesia Stock Exchange (IDX), 28 were selected as the research sample according to the purposive sampling criteria. The results show that during the three-year research period (2010-2012), the number of companies predicted to be healthy in each year was less than half of the total sample. The multiple regression analysis showed that all of the research hypotheses are accepted, which means that (WC/TA), (RE/TA), (EBIT/TA), (MVE/TL), and (S/TA), both partially and simultaneously, had an impact on a company's financial condition.
Keywords: Altman's Z-score model, financial condition, mining companies, Indonesia
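As a rough illustration of the scoring step (not code from the paper), the five ratios above combine linearly into the Z-score. The coefficients and cutoffs below are the standard ones published for Altman's original public-company model, and the sample ratio values are hypothetical:

```python
# Altman Z-score for a public company (original 1968 coefficients).
# The ratio values passed in at the bottom are hypothetical, for illustration only.

def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, s_ta):
    """Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * s_ta)

def classify(z):
    # Standard zones for the original model.
    if z > 2.99:
        return "safe (healthy)"
    if z >= 1.81:
        return "grey zone"
    return "distress"

z = altman_z(wc_ta=0.15, re_ta=0.20, ebit_ta=0.10, mve_tl=1.1, s_ta=0.9)
print(round(z, 3), classify(z))  # 2.35 grey zone
```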
Procedia PDF Downloads 530
16602 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model
Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin
Abstract:
Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic, and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers that combines sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
Keywords: anomaly detection, autoencoder, data centers, deep learning
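The reconstruction-error idea behind the classification step can be sketched as follows. This is a hand-written illustration, not the authors' code: the sensor trace, reconstruction, window size, and feature set are all assumptions, and in the paper a random forest (not inspection of feature values) makes the final call:

```python
# Per-window features of the difference signal |x - x_hat|, which a
# downstream classifier (e.g. a random forest) would consume.
def residual_features(x, x_hat, window=4):
    assert len(x) == len(x_hat)
    residual = [abs(a - b) for a, b in zip(x, x_hat)]
    feats = []
    for i in range(0, len(residual) - window + 1, window):
        w = residual[i:i + window]
        feats.append((sum(w) / window, max(w)))  # (mean, max) per window
    return feats

# Hypothetical sensor trace and its autoencoder reconstruction: the
# autoencoder, trained on normal data, fails to reproduce the jump.
x     = [20.0, 20.5, 21.0, 20.8, 25.0, 25.5, 26.0, 25.8]
x_hat = [20.1, 20.4, 21.1, 20.7, 21.0, 21.2, 21.1, 21.0]
feats = residual_features(x, x_hat)
print(feats)  # second window has much larger residuals -> likely anomaly
```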
Procedia PDF Downloads 198
16601 Model and Neural Control of the Depth of Anesthesia during Surgery
Authors: Javier Fernandez, Mayte Medina, Rafael Fernandez de Canete, Nuria Alcain, Juan Carlos Ramos-Diaz
Abstract:
At present, the experimentation of anesthetic drugs on patients requires a regulation protocol, and the response of each patient to several doses of the entry drug must be well known. Therefore, the development of pharmacological dose control systems is a promising field of research in anesthesiology. In this paper, a non-linear compartmental pharmacokinetic-pharmacodynamic model has been developed that describes the depth-of-anesthesia effect in a sufficiently reliable way over a set of patients, with the depth effect quantified by the Bi-Spectral Index. Afterwards, an Artificial Neural Network (ANN) predictive controller has been designed based on the depth-of-anesthesia model so as to keep the patient in the optimum condition while undergoing surgical treatment. For the purpose of quantifying the efficiency of the neural predictive controller, a classical proportional-integral-derivative (PID) controller has also been developed to compare both strategies. Results show the superior performance of the neural predictive controller during Bi-Spectral Index reference tracking.
Keywords: anesthesia, bi-spectral index, neural network control, pharmacokinetic-pharmacodynamical model
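The classical baseline mentioned above is a PID loop. A minimal sketch of discrete PID setpoint tracking on a made-up first-order plant is given below; the gains and plant dynamics are illustrative placeholders, not the authors' patient model:

```python
# Discrete PID controller tracking a setpoint on a toy first-order plant.
# Gains and plant dynamics are illustrative, not tuned for anesthesia.
def pid_track(setpoint, steps=300, kp=2.0, ki=1.0, kd=0.1, dt=0.1):
    y, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (-y + u)        # toy plant: dy/dt = -y + u
    return y

print(pid_track(50.0))  # settles near the setpoint thanks to the integral term
```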
Procedia PDF Downloads 340
16600 Developing a Model for Information Giving Behavior in Virtual Communities
Authors: Pui-Lai To, Chechen Liao, Tzu-Ling Lin
Abstract:
Virtual communities have created a range of new social spaces in which to meet and interact with one another. Whether as a stand-alone model or as a supplement to sustain competitive advantage for conventional business models, building virtual communities has been hailed as one of the major strategic innovations of the new economy. However, for a virtual community to evolve, the biggest challenge is how to make members actively give information or provide advice. Even in busy virtual communities, usually only a small fraction of members post information actively. In order to investigate the determinants of the information-giving willingness of those contributors who usually actively provide their opinions, we proposed a model to understand the reasons for contribution in communities. The study will serve as a basis for the future growth of information giving in virtual communities.
Keywords: information giving, social identity, trust, virtual community
Procedia PDF Downloads 326
16599 Numerical Simulations on Feasibility of Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization
Authors: Taiki Baba, Tomoaki Hashimoto
Abstract:
The random dither quantization method enables much better performance than simple uniform quantization in the design of quantized control systems. Motivated by this fact, a stochastic model predictive control method, in which a performance index is minimized subject to probabilistic constraints imposed on the state variables of the system, has been proposed for linear feedback control systems with random dither quantization. In other words, a method for solving optimal control problems subject to probabilistic state constraints for linear discrete-time control systems with random dither quantization has already been established. To the best of our knowledge, however, the feasibility of this kind of optimal control problem has not yet been studied. Our objective in this paper is to investigate the feasibility of stochastic model predictive control problems for linear discrete-time control systems with random dither quantization. To this end, we provide the results of numerical simulations that verify this feasibility.
Keywords: model predictive control, stochastic systems, probabilistic constraints, random dither quantization
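Random dither quantization itself is easy to demonstrate. The sketch below assumes the standard non-subtractive scheme (it is not taken from the paper) and shows why dithering makes the quantizer output unbiased on average, unlike plain uniform quantization:

```python
import random

def uniform_quantize(x, step=1.0):
    # Simple mid-tread uniform quantizer.
    return step * round(x / step)

def dither_quantize(x, step=1.0, rng=random):
    # Non-subtractive random dither: add noise uniform on [-step/2, step/2)
    # before quantizing, so the expected output equals the input.
    d = rng.uniform(-step / 2, step / 2)
    return uniform_quantize(x + d, step)

random.seed(0)
x = 0.3
avg = sum(dither_quantize(x) for _ in range(20000)) / 20000
print(uniform_quantize(x), round(avg, 2))  # plain quantizer gives 0.0, dithered mean is ~0.3
```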
Procedia PDF Downloads 283
16598 A Comparative Evaluation of the SIR and SEIZ Epidemiological Models to Describe the Diffusion Characteristics of COVID-19 Polarizing Viewpoints on Online
Authors: Maryam Maleki, Esther Mead, Mohammad Arani, Nitin Agarwal
Abstract:
This study is conducted to examine how opposing viewpoints related to COVID-19 were diffused on Twitter. To accomplish this, six datasets were analyzed using two epidemiological models, SIR (Susceptible, Infected, Recovered) and SEIZ (Susceptible, Exposed, Infected, Skeptics). The six datasets were chosen because they represent opposing viewpoints on the COVID-19 pandemic: three contain anti-subject hashtags, while the other three contain pro-subject hashtags. The time frame for all datasets is three years, from January 2020 to December 2022. The findings revealed that while both models were effective in evaluating the propagation trends of these polarizing viewpoints, the SEIZ model was more accurate, with a relatively lower error rate (6.7%) compared to the SIR model (17.3%). Additionally, the relative error for both models was lower for anti-subject hashtags than for pro-subject hashtags. By leveraging epidemiological models, insights into the propagation trends of polarizing viewpoints on Twitter were gained. This study paves the way for the development of methods to prevent the spread of ideas that lack scientific evidence while promoting the dissemination of scientifically backed ideas.
Keywords: mathematical modeling, epidemiological model, SEIZ model, SIR model, COVID-19, Twitter, social network analysis, social contagion
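As a generic illustration of the simpler of the two models (with invented parameters, not the authors' fitted values), the SIR compartment equations can be integrated with a basic forward-Euler step:

```python
def sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    # Forward-Euler integration of dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    # dR/dt = gamma*I, with S, I, R as population fractions (S + I + R = 1).
    s, i, r = s0, i0, r0
    for _ in range(int(round(days / dt))):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# Hypothetical rates: contact rate 0.3/day, recovery rate 0.1/day (R0 = 3).
s, i, r = sir(beta=0.3, gamma=0.1, s0=0.999, i0=0.001, r0=0.0, days=200)
print(round(s, 3), round(i, 6), round(r, 3))  # most of the population ends up in R
```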
Procedia PDF Downloads 73
16597 Dynamics of Adiabatic Rapid Passage in an Open Rabi Dimer Model
Authors: Justin Zhengjie Tan, Yang Zhao
Abstract:
Adiabatic rapid passage, a popular method of achieving population inversion, is studied in a Rabi dimer model in the presence of noise, which acts as a dissipative environment. The integration of the multi-Davydov D2 Ansatz into the time-dependent variational framework enables us to model the intricate quantum system accurately. By driving the system with a field strength resonant with the energy spacing, the probability of adiabatic rapid passage, which is modelled after the Landau-Zener model, can be derived along with several other observables, such as the photon population. The effects of a dissipative environment can be reproduced by coupling the system to a common phonon mode. By manipulating the strength and frequency of the driving field, along with the coupling strength of the phonon mode to the qubits, we are able to control the qubit and photon dynamics and subsequently increase the probability of adiabatic rapid passage occurring.
Keywords: quantum electrodynamics, adiabatic rapid passage, Landau-Zener transitions, dissipative environment
Procedia PDF Downloads 89
16596 Comparison of ANFIS Update Methods Using Genetic Algorithm, Particle Swarm Optimization, and Artificial Bee Colony
Authors: Michael R. Phangtriastu, Herriyandi Herriyandi, Diaz D. Santika
Abstract:
This paper presents a comparison of the implementation of metaheuristic algorithms to train the antecedent and consequent parameters of an adaptive network-based fuzzy inference system (ANFIS). The algorithms compared are the genetic algorithm (GA), particle swarm optimization (PSO), and artificial bee colony (ABC). The objective of this paper is to benchmark these well-known metaheuristic algorithms. The algorithms are applied to several data sets of different natures, and combinations of the algorithms' parameters are tested. In all algorithms, different population sizes are tested; in PSO, combinations of velocity parameters are tested; and in ABC, different abandonment limits are tested. The experiments show that ABC is more reliable than the other algorithms: ABC achieves a better mean square error (MSE) than the other algorithms on all data sets.
Keywords: ANFIS, artificial bee colony, genetic algorithm, metaheuristic algorithm, particle swarm optimization
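The kind of optimization loop being benchmarked can be sketched with a minimal particle swarm optimizer. The velocity-update rule is the textbook one; the inertia/acceleration coefficients and the toy sphere objective (standing in for the ANFIS training MSE) are assumptions, not the paper's setup:

```python
import random

def pso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda k: pbest_val[k])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull + social pull.
                vel[k][d] = (w * vel[k][d]
                             + c1 * r1 * (pbest[k][d] - pos[k][d])
                             + c2 * r2 * (gbest[d] - pos[k][d]))
                pos[k][d] += vel[k][d]
            val = f(pos[k])
            if val < pbest_val[k]:
                pbest[k], pbest_val[k] = pos[k][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[k][:], val
    return gbest, gbest_val

# Toy objective: a 3-dimensional sphere function with minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print([round(v, 3) for v in best], best_val)  # near the origin
```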
Procedia PDF Downloads 356
16595 Test of Moisture Sensor Activation Speed
Authors: I. Parkova, A. Vališevskis, A. Viļumsone
Abstract:
Nocturnal enuresis, or bed-wetting, is intermittent incontinence during sleep in children after age 5 that may precipitate a wide range of behavioural and developmental problems. One of the non-pharmacological treatment methods is the use of a bed-wetting alarm system. In order to improve the comfort of a nocturnal enuresis alarm system, the modular moisture sensor should be replaced by a textile sensor. In this study, the behaviour and moisture detection speed of woven and sewn sensors were compared by analysing the change in electrical resistance after a solution (salt water) was dripped on sensor samples. The sample materials have different structures and yarn locations, which affect the solution detection rate. A sensor system circuit was designed, and two sensor tests were performed: a system activation test and a false alarm test, to determine the sensitivity of the system and the activation threshold. The sewn sensor performed better in the system activation test (faster reaction), but the woven sensor performed better in the false alarm test: it was less sensitive to perspiration simulation. After the experiments, it was found that the optimum switching threshold is 3 V in the case of a 5 V input voltage, which provides protection against false alarms, for example, during intensive sweating.
Keywords: conductive yarns, moisture textile sensor, industry, material
Procedia PDF Downloads 249
16594 Adsorption of Cd2+ from Aqueous Solutions Using Chitosan Obtained from a Mixture of Littorina littorea and Achatinoidea Shells
Authors: E. D. Paul, O. F. Paul, J. E. Toryila, A. J. Salifu, C. E. Gimba
Abstract:
Adsorption of Cd2+ ions from aqueous solution by chitosan, a natural polymer obtained from a mixture of the exoskeletons of Littorina littorea (periwinkle) and Achatinoidea (snail), was studied at varying adsorbent dose, contact time, metal ion concentration, temperature, and pH using the batch adsorption method. The equilibrium adsorption isotherms were determined between 298 K and 345 K. The adsorption data were fitted to the Langmuir, Freundlich, and pseudo-second-order kinetic models. It was found that the Langmuir isotherm model fitted the experimental data best, with a maximum monolayer adsorption of 35.1 mg kg⁻¹ at 308 K. The entropy and enthalpy of adsorption were -0.1121 kJ mol⁻¹ K⁻¹ and -11.43 kJ mol⁻¹, respectively. The Freundlich adsorption model gave Kf and n values consistent with good adsorption. The pseudo-second-order reaction model gave a straight-line plot with a rate constant of 1.291 × 10⁻³ kg mg⁻¹ min⁻¹. The qe value was 21.98 mg kg⁻¹, indicating that the adsorption of the cadmium ion by the chitosan composite followed the pseudo-second-order kinetic model.
Keywords: adsorption, chitosan, Littorina littorea, Achatinoidea, natural polymer
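Fitting a Langmuir isotherm of the kind referred to above reduces to a linear regression on a transformed isotherm. The sketch below uses the common linearization Ce/qe = Ce/qmax + 1/(KL·qmax) with synthetic equilibrium data, not the paper's measurements:

```python
# Fit Langmuir parameters from the linearized form
#   Ce/qe = (1/qmax)*Ce + 1/(KL*qmax)
# via ordinary least squares on (Ce, Ce/qe).
def fit_langmuir(ce, qe):
    y = [c / q for c, q in zip(ce, qe)]          # Ce/qe
    n = len(ce)
    mx, my = sum(ce) / n, sum(y) / n
    slope = (sum((c - mx) * (v - my) for c, v in zip(ce, y))
             / sum((c - mx) ** 2 for c in ce))
    intercept = my - slope * mx
    qmax = 1.0 / slope                            # maximum monolayer capacity
    kl = slope / intercept                        # Langmuir constant
    return qmax, kl

# Synthetic data generated from qmax = 35, KL = 0.2, so the fit should recover them.
qmax_true, kl_true = 35.0, 0.2
ce = [2.0, 5.0, 10.0, 20.0, 40.0]
qe = [qmax_true * kl_true * c / (1 + kl_true * c) for c in ce]
qmax, kl = fit_langmuir(ce, qe)
print(round(qmax, 2), round(kl, 3))  # 35.0 0.2
```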
Procedia PDF Downloads 410
16593 Developing a Green Strategic Management Model with Regard to HSE-MS
Authors: Amin Padash, Gholam Reza Nabi Bid Hendi, Hassan Hoveidi
Abstract:
Purpose: The aim of this research is to develop a model for green management based on the Health, Safety and Environmental Management System (HSE-MS). An HSE-MS can be a powerful tool for organizations to improve their environmental, health, and safety performance and to enhance their business efficiency through green management. Model: The model developed in this study can be used by industries as a guideline for implementing green management by considering the HSE-MS. Case Study: The Pars Special Economic / Energy Zone Organization, on behalf of Iran's Petroleum Ministry and the National Iranian Oil Company (NIOC), manages and develops the South and North oil and gas fields in the region. Methodology: In terms of its objective, this research is applied; in terms of implementation, it is descriptive and also prescriptive. The MCDM (Multiple Criteria Decision-Making) technique was used to determine the priorities of the factors. Based on a process approach, the model consists of the following steps and components: first, the factors involved in green issues are determined and a framework is built around them; then, using an MCDM algorithm (TOPSIS), the priorities of the basic variables are determined. The authors believe that the proposed model and the results of this research can help industry managers implement green practices according to the HSE-MS in a more efficient and effective manner. Findings and conclusion: The basic factors involved in green issues and their weights are the main finding; the model and the relations between the factors are the other findings of this research. The case considers a petrochemical company for promoting the system of ecological industry thinking.
Keywords: fuzzy-AHP method, green management, health, safety and environmental management system, MCDM technique, TOPSIS
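The TOPSIS step can be illustrated generically: rank alternatives by closeness to an ideal point and distance from an anti-ideal point. The alternatives, criteria, and weights below are hypothetical, not the factors identified in the study:

```python
def topsis(matrix, weights, benefit):
    # matrix[i][j]: score of alternative i on criterion j.
    # benefit[j]: True if higher is better for criterion j (False for costs).
    m, n = len(matrix), len(matrix[0])
    # 1. Vector-normalize each column, then apply weights.
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # 2. Ideal and anti-ideal points per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti  = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # 3. Closeness coefficient: d- / (d+ + d-), higher is better.
    scores = []
    for row in v:
        dp = sum((a - b) ** 2 for a, b in zip(row, ideal)) ** 0.5
        dm = sum((a - b) ** 2 for a, b in zip(row, anti)) ** 0.5
        scores.append(dm / (dp + dm))
    return scores

# Three alternatives rated on two benefit criteria and one cost criterion.
scores = topsis([[7, 9, 3], [8, 7, 5], [9, 6, 8]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, True, False])
print([round(s, 3) for s in scores])  # first alternative ranks highest
```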
Procedia PDF Downloads 415
16592 Service Interactions Coordination Using a Declarative Approach: Focuses on Deontic Rule from Semantics of Business Vocabulary and Rules Models
Authors: Nurulhuda A. Manaf, Nor Najihah Zainal Abidin, Nur Amalina Jamaludin
Abstract:
Coordinating service interactions is a vital part of developing distributed applications that are built as networks of autonomous participants (e.g., software components, web services, online resources) and involve collaboration among a diverse set of participant services from different providers. The complexity of coordinating service interactions reflects how important techniques and approaches are for designing and coordinating the interaction between participant services, so that the overall goal of the collaboration is achieved. The objective of this research is to develop the capability of steering a complex service interaction towards a desired outcome. To this end, an efficient technique for modelling, generating, and verifying the coordination of service interactions is developed. The developed model describes service interactions using a service choreography approach and focuses on a declarative approach, advocating an Object Management Group (OMG) standard, the Semantics of Business Vocabulary and Rules (SBVR). This model, namely the SBVR model for service choreographies, focuses on declarative deontic rules expressing both obligation and prohibition, which are particularly useful for coordinating service interactions. The generated SBVR model is then formulated and transformed into an Alloy model using the Alloy Analyzer for verification. The transformation of SBVR into Alloy makes it possible to automatically generate the corresponding coordination of service interactions (service choreography), producing an immediate execution instance that satisfies the constraints of the specification, and to verify whether a specific request can be realised in the generated choreography.
Keywords: service choreography, service coordination, behavioural modelling, complex interactions, declarative specification, verification, model transformation, semantics of business vocabulary and rules, SBVR
Procedia PDF Downloads 157
16591 A Phase Field Approach to Model Crack Interface Interaction in Ceramic Matrix Composites
Authors: Dhaladhuli Pranavi, Amirtham Rajagopal
Abstract:
There are various failure modes in ceramic matrix composites; notable ones are fiber breakage, matrix cracking, and fiber-matrix debonding. Crack nucleation and propagation in the microstructure of such composites require an understanding of the interaction of a crack with the multiple-inclusion heterogeneous system and its interfaces. In order to assess structural integrity, the material parameters, especially those of the interface that governs crack growth, should be determined. In the present work, a nonlocal phase field approach is proposed to model the crack-interface interaction in such composites. Nonlocal approaches help in understanding the complex mechanisms of delamination growth and mitigation and operate at a material length scale. The performance of the proposed formulation is illustrated through representative numerical examples. The model is implemented in the framework of the finite element method, and several parametric studies on interface crack interaction are conducted. The proposed model is easy and simple to implement and works very well in modeling fracture in composite systems.
Keywords: composite, interface, nonlocal, phase field
Procedia PDF Downloads 144
16590 Application of Seasonal Autoregressive Integrated Moving Average Model for Forecasting Monthly Flows in Waterval River, South Africa
Authors: Kassahun Birhanu Tadesse, Megersa Olumana Dinka
Abstract:
Reliable future river flow information is basic for the planning and management of any river system. For a data-scarce river system having only river flow records, like the Waterval River, univariate time series models are appropriate for river flow forecasting. In this study, a univariate Seasonal Autoregressive Integrated Moving Average (SARIMA) model was applied to forecast Waterval River flow using the GRETL statistical software. Mean monthly river flows from 1960 to 2016 were used for modeling. Different unit root tests and Mann-Kendall trend analysis were performed to test the stationarity of the observed flow time series. The time series was differenced to remove the seasonality. Using the correlogram of the seasonally differenced time series, different SARIMA models were identified, their parameters were estimated, and a diagnostic check-up of the model forecasts was performed using white noise and heteroscedasticity tests. Finally, based on the minimum Akaike Information Criterion (AIC) and Hannan-Quinn Criterion (HQC), SARIMA (3, 0, 2) x (3, 1, 3)12 was selected as the best model for Waterval River flow forecasting. Therefore, this model can be used to generate future river information for water resources development and management in the Waterval River system. The SARIMA model can also be used for forecasting other similar univariate time series with seasonality characteristics.
Keywords: heteroscedasticity, stationarity test, trend analysis, validation, white noise
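The seasonal differencing step mentioned above (the D=1, s=12 part of the selected SARIMA(3,0,2)x(3,1,3)12 model) is simple to show. The monthly flows below are invented, not Waterval River data:

```python
def seasonal_difference(series, lag=12):
    # y'_t = y_t - y_{t-lag}: removes a repeating annual pattern from
    # monthly data; this is the D=1, s=12 operation in a SARIMA model.
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

# Two years of hypothetical monthly flows: a fixed seasonal cycle
# plus a constant year-on-year rise of 2 units.
cycle = [5, 6, 9, 14, 20, 24, 26, 23, 17, 11, 7, 5]
flows = cycle + [c + 2 for c in cycle]
diff = seasonal_difference(flows)
print(diff)  # the seasonal cycle cancels, leaving only the trend component
```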
Procedia PDF Downloads 209
16589 Emulsified Oil Removal in Produced Water by Graphite-Based Adsorbents Using Adsorption Coupled with Electrochemical Regeneration
Authors: Zohreh Fallah, Edward P. L. Roberts
Abstract:
One of the big challenges in produced water treatment is removing oil from water in the form of emulsified droplets, which are not easily separated. An attractive approach is adsorption, as it is a simple and effective process. However, adsorbents must be regenerated in order to make the process cost-effective. Several sorbents have been tested for treating oily wastewater, but issues such as the high energy consumption of thermal regeneration of activated carbon have been reported. Due to their significant electrical conductivity, Graphite Intercalation Compounds (GIC) were found to be suitable for electrochemical regeneration. They are non-porous materials with a low surface area and fast adsorptive capacity, which are useful for the removal of low concentrations of organics. An innovative adsorption/regeneration process has been developed at the University of Manchester in which adsorption of organics is performed by a patented GIC adsorbent coupled with subsequent electrochemical regeneration. The oxidation of adsorbed organics enables 100% regeneration, so the adsorbent can be reused over multiple adsorption cycles. GIC adsorbents are capable of removing a wide range of organics and pollutants; however, no comparable report is available on the removal of emulsified oil in produced water using the abovementioned process. In this study, the performance of this technology for the removal of emulsified oil in wastewater was evaluated. Batch experiments were carried out to determine the adsorption kinetics and equilibrium isotherm for both real produced water and model emulsions. The amount of oil in the wastewater was measured using toluene extraction/fluorescence analysis before and after the adsorption and electrochemical regeneration cycles. It was found that oil-in-water emulsions could be successfully treated by the process, and more than 70% of the oil was removed.
Keywords: adsorption, electrochemical regeneration, emulsified oil, produced water
Procedia PDF Downloads 584
16588 Exploring the Effect of Using Lesh Model in Enhancing Prospective Mathematics Teachers’ Number Sense
Authors: Areej Isam Barham
Abstract:
Developing students' number sense is an essential element in the learning of mathematics. Number sense is one of the foundational ideas in mathematics, where students need to understand numbers, represent them in different ways, and realize the relationships among numbers. Number sense also reflects students' understanding of the meaning of operations, how they relate to one another, how to compute fluently, and how to make reasonable estimates. Developing students' number sense in the mathematics classroom requires good preparation of mathematics teachers, those who will direct their students towards a real understanding of numbers and its implementation in the learning of mathematics. This study describes the development of elementary prospective mathematics teachers' number sense through a mathematics teaching methods course at Qatar University and examines the effect of using the Lesh model on enhancing prospective teachers' number sense. Thirty-nine elementary prospective mathematics teachers were involved in the current study. The study followed an experimental research approach, and quantitative research methods were used to answer the research questions. A pre-post number sense test was constructed and administered before and after teaching with the Lesh model. Data were analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive data analysis and t-tests were used to examine the impact of using the Lesh model on prospective teachers' number sense. The findings indicated a poor number sense and limited numeracy skills before implementing the Lesh model, which strongly demonstrates the importance of the study. The results also revealed a positive, statistically significant impact of using the Lesh model on prospective teachers' number sense. The discussion addresses different features and issues related to the participants' number sense. In light of the study, the research presents recommendations and suggestions for the future development of prospective mathematics teachers' number sense.
Keywords: number sense, Lesh model, prospective mathematics teachers, development of number sense
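A pre-post comparison of this kind typically rests on a paired t-test, t = d̄/(s_d/√n). As a generic sketch with invented scores (not the study's data):

```python
import math

def paired_t(pre, post):
    # t statistic for paired samples: mean difference over its standard error.
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance of d
    return mean_d / math.sqrt(var_d / n)                  # df = n - 1

# Hypothetical pre/post number sense scores for six participants.
pre  = [42, 55, 48, 60, 39, 51]
post = [58, 63, 57, 71, 52, 60]
t = paired_t(pre, post)
print(round(t, 2))  # compare against the critical t value with df = n - 1
```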
Procedia PDF Downloads 144
16587 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates
Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe
Abstract:
Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with proper treatment. Drug-resistant TB occurs when the bacteria become resistant to the drugs used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and it takes a long time to identify the drug-resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science approaches can offer new solutions to this problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study used whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository, as training data to build the ML models. Samples from different countries were included to generalize over the large group of TB isolates from different regions of the world; this exposes the model to different behaviors of the TB bacteria and makes it robust. Model training considered three pieces of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and resistance-associated gene information for the particular drug. Two major datasets were constructed from this information: F1 and F2 were treated as two independent datasets, and the third piece of information was used as the class to label both. Five machine learning algorithms were considered: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost. The models were trained on the datasets F1, F2, and F1F2, the latter being F1 and F2 merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were run through the gradient boosting algorithm, and the outputs were combined into a single dataset, called the F1F2 ensemble dataset, on which models were trained using each of the five algorithms. The experiments show that the ensemble-approach model trained with the Gradient Boosting algorithm outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, i.e., the RF + Gradient Boosting model, for predicting the antibiotic resistance phenotypes of TB isolates.
Keywords: machine learning, MTB, WGS, drug resistant TB
Procedia PDF Downloads 56
16586 Commonly Used Non-Medical Practices and Perceived Benefits in Couples with Fertility Problems in Turkey
Authors: S. Fata, M. A. Tokat, N. Bagardi, B. Yilmaz
Abstract:
Nowadays, various traditional practices are used throughout the world with the aim of improving fertility: traditional remedies, acupuncture, and religious practices such as sacrifice are frequently used. Studies often evaluate the traditional practices used by women, but the use of such non-medical practices by couples, and their specific reasons for using these methods, have been less investigated. The aim of this study was to evaluate the commonly used non-medical practices and determine the perceived benefits among couples with fertility problems in Turkey. This is a descriptive study. Research data were collected between May and July 2016, in the Izmir Ege Birth Education and Research Hospital Assisted Reproduction Clinic, from 151 couples with fertility problems. A Personal Information Form and a Non-Medical Practices Used for Fertility Evaluation Form were used. A permission letter numbered 'GOA 2649' from the Dokuz Eylul University Non-Invasive Research Ethics Board, a permission letter from the institution, and written consent from the participants were received to carry out the study. Frequencies and proportions were used in the evaluation of the data. The average age of the women participating in the study was 32.87; 35.8% were high school graduates, 60.3% were housewives, and 58.9% lived in a city. Of the husbands, 30.5% were high school graduates, 96.7% were employed, and 60.9% lived in a city. Of the couples, 78.1% lived as a nuclear family; the average length of marriage was 7.58 years; in 33.8% the fertility problem stemmed from the woman; 42.4% had received a diagnosis 1-2 years previously; and 35.1% had been in treatment for 1-2 years. Of the women, 35.8% reported use of non-medical practices: 24.4% used figs, onion cure, hacemat, locust, or bee-pollen milk; 18.2% used herbs; 13.1% vowed; 12.1% went to a tomb; 10.1% did not bathe for a few days after the embryo transfer; 9.1% used thermal water baths; 5.0% manually corrected the womb; 5.0% had amulets printed by a Hodja; and 3.0% went to a Hodja/pilgrims. Among the perceived benefits of using non-medical practices, facilitating pregnancy and implantation and improving oocyte quality were the most frequently expressed. The women said that they often used herbs to develop follicles, did not bathe after embryo transfer with the aim of supporting implantation, and used thermal waters to get rid of infection. Compared to the women, only 25.8% of the men used non-medical practices: of these, 52.1% reported using peanuts, hacemat, locust, or bee-pollen milk; 14.9% used herbs; 12.8% vowed; 10.1% went to a tomb; and 10.1% used thermal water baths. Improving sperm number, motility, and quality were the most expected benefits. The men said that they often used herbs to improve sperm number and used peanuts, hacemat, locust, and bee-pollen milk to improve sperm motility and quality. Couples in Turkey often use non-medical practices to deal with fertility problems. Some of the practices considered useful can adversely affect health. Healthcare providers should evaluate the use of non-medical practices and should inform patients if a practice has known adverse effects on health.
Keywords: fertility, couples, non-medical practice, perceived benefit
Procedia PDF Downloads 344
16585 Revealing the Urban Heat Island: Investigating its Spatial and Temporal Changes and Relationship with Air Quality
Authors: Aneesh Mathew, Arunab K. S., Atul Kumar Sharma
Abstract:
The uncontrolled rise in population has led to unplanned, swift, and unsustainable urban expansion, causing detrimental environmental impacts on both local and global ecosystems. This research presents a comprehensive examination of the Urban Heat Island (UHI) phenomenon in Bengaluru and Hyderabad, India, centering on the spatial and temporal distribution of UHI and its correlation with air pollutants. Conducted across summer and winter seasons from 2001 to 2021, the study found that UHI intensity varies seasonally, peaking in summer and decreasing in winter. The annual maximum UHI intensities range between 4.65 °C and 6.69 °C in Bengaluru and between 5.74 °C and 6.82 °C in Hyderabad; Bengaluru in particular experiences notable fluctuations in average UHI intensity. Introducing the Urban Thermal Field Variance Index (UTFVI), the study indicates a consistently strong UHI effect in both cities, significantly impacting living conditions. Moreover, hotspot analysis demonstrates a rising trend in UHI-affected areas over the years in both cities. This research underscores the connection between air pollutant concentrations and land surface temperature (LST), highlighting the necessity of understanding UHI dynamics for urban environmental management and public health. It contributes to a deeper understanding of UHI patterns in swiftly urbanizing areas, providing insights into the intricate relationship between urbanization, climate, and air quality. These findings serve as guidance for policymakers, urban planners, and researchers, facilitating the development of sustainable strategies that mitigate the adverse impacts of uncontrolled expansion while promoting the well-being of local communities and the global environment.
Keywords: urban heat island effect, land surface temperature, air pollution, urban thermal field variance index
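The two quantities at the center of the study can be reproduced in principle from land surface temperature samples. Below is a minimal sketch, assuming the commonly used UTFVI definition (Ts − Tmean)/Ts; the temperature values are invented for illustration, not the study's Bengaluru or Hyderabad measurements.

```python
# Illustrative only: UHI intensity as the urban-minus-rural mean LST
# difference, and UTFVI per pixel as (Ts - Tmean) / Ts. The temperature
# samples below are invented, not measurements from this study.

def uhi_intensity(urban_lst, rural_lst):
    """Mean urban LST minus mean rural LST (both in the same units)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(urban_lst) - mean(rural_lst)

def utfvi(ts, mean_lst):
    """Urban Thermal Field Variance Index for one pixel (temperatures in K)."""
    return (ts - mean_lst) / ts

urban = [310.2, 312.5, 311.8]   # urban LST samples, kelvin (illustrative)
rural = [305.1, 306.0, 305.6]   # rural LST samples, kelvin (illustrative)
print(round(uhi_intensity(urban, rural), 2))  # prints 5.93; positive => heat island
```

A positive intensity marks a heat island, and larger UTFVI values correspond to stronger ecological stress classes in the UTFVI literature.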
Procedia PDF Downloads 85
16584 An Application of the Single Equation Regression Model
Authors: S. K. Ashiquer Rahman
Abstract:
Recently, oil has become more influential as a key material in almost every economic sector. As can be seen from the news, when the oil price changes or OPEC announces a new strategy, the effect spreads to every part of the economy, directly and indirectly. That is one reason why people constantly observe the oil price and try to forecast its changes. The most important factor affecting the price is supply, which is determined by the number of wildcat wells drilled. Therefore, a study of the number of wellheads and other economic variables may give some understanding of the mechanism behind oil supply. In this paper, we consider the relationship between the number of wellheads and three key factors: the price at the wellhead, domestic output, and GNP in constant dollars. We also add trend variables to the models because the consumption of oil varies over time. Moreover, this paper uses econometric methods to estimate the parameters of the model, applies tests to verify the results obtained, and then draws conclusions from the model.
Keywords: price, domestic output, GNP, trend variable, wildcat activity
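A single-equation model of this kind is typically estimated by ordinary least squares. The sketch below shows the mechanics with a synthetic series and only a price regressor plus a trend variable; the data and coefficients are illustrative, not the paper's series or estimates.

```python
# Hedged sketch: ordinary least squares for a single-equation model such as
# wellheads_t = b0 + b1*price_t + b2*trend_t, solved via the normal
# equations (X'X) b = X'y with plain Gaussian elimination. Synthetic data.

def ols(X, y):
    k = len(X[0])
    # Build the normal equations A b = c, where A = X'X and c = X'y.
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
            c[r] -= f * c[col]
    # Back substitution.
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Synthetic series: wellhead count responds to price plus a deterministic trend.
price = [2.0, 2.2, 2.1, 2.5, 2.7, 2.6, 3.0, 3.1]
trend = list(range(8))
y = [10 + 4 * p + 0.5 * t for p, t in zip(price, trend)]  # exact relation
X = [[1.0, p, t] for p, t in zip(price, trend)]
b0, b1, b2 = ols(X, y)  # recovers roughly (10, 4, 0.5)
```

Because the synthetic data satisfy the relation exactly, the estimates recover the true coefficients; with real data one would follow the estimation with the diagnostic tests the abstract mentions.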
Procedia PDF Downloads 66
16583 Effect of Baffles on the Cooling of Electronic Components
Authors: O. Bendermel, C. Seladji, M. Khaouani
Abstract:
In this work, we conducted a numerical study of the thermal and dynamic behaviour of air in a horizontal channel containing electronic components. The influence of baffles on the velocity and temperature profiles is discussed. The finite volume method and the SIMPLE algorithm are used to solve the conservation equations of mass, momentum, and energy. The results show that baffles improve heat transfer between the cooling air and the electronic components, and that the velocity increases to about three times the initial velocity.
Keywords: electronic components, baffles, cooling, fluids engineering
Procedia PDF Downloads 300
16582 Multiscale Process Modeling Analysis for the Prediction of Composite Strength Allowables
Authors: Marianna Maiaru, Gregory M. Odegard
Abstract:
During the processing of high-performance thermoset polymer matrix composites, chemical reactions occur during elevated pressure and temperature cycles, causing the constituent monomers to crosslink and form a molecular network that can gradually sustain stress. As the crosslinking process progresses, the material naturally experiences gradual shrinkage due to the increase in covalent bonds in the network. Once the composite completes the cure cycle and is brought to room temperature, the thermal expansion mismatch between the fibers and the matrix causes additional residual stresses to form. These compounded residual stresses can compromise the reliability of the composite material and reduce the composite strength. Composite process modeling is greatly complicated by the multiscale nature of the composite architecture. At the molecular level, the degree of cure controls the local shrinkage and thermal-mechanical properties of the thermoset. At the microscopic level, the local fiber architecture and packing affect the magnitudes and locations of residual stress concentrations. At the macroscopic level, the layup sequence controls the nature of crack initiation and propagation due to residual stresses. The goal of this research is to use molecular dynamics (MD) and finite element analysis (FEA) to predict the residual stresses in composite laminates and their effect on composite failure. MD is used to predict the polymer shrinkage and thermomechanical properties as a function of degree of cure. This information is used as input to FEA to predict the residual stresses at the microscopic level resulting from the complete cure process. Virtual testing is subsequently conducted to predict strength allowables, and experimental characterization is used to validate the modeling.
Keywords: molecular dynamics, finite element analysis, process modeling, multiscale modeling
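The cure-dependent shrinkage that feeds the MD-to-FEA workflow is often represented by a phenomenological cure-kinetics law. The sketch below uses a generic Kamal-type autocatalytic model with assumed, unfitted constants; it is not the resin chemistry or the MD-derived properties used in this work.

```python
# Hypothetical illustration: degree of cure a(t) from an autocatalytic law
# da/dt = k * a^m * (1 - a)^n, integrated with forward Euler. Volumetric
# cure shrinkage is then taken proportional to a. All constants are assumed.

def cure_profile(k=2.0, m=0.3, n=1.5, a0=0.01, dt=0.01, t_end=5.0):
    """Return the degree-of-cure history a(0), a(dt), ..., a(t_end)."""
    a, history = a0, [a0]
    for _ in range(int(t_end / dt)):
        a = min(a + dt * k * (a ** m) * ((1.0 - a) ** n), 1.0)
        history.append(a)
    return history

def shrinkage(a, total_shrinkage=0.04):
    """Volumetric shrinkage at degree of cure a (4% at full cure, assumed)."""
    return total_shrinkage * a

profile = cure_profile()
final_cure = profile[-1]  # approaches full cure as the cycle completes
```

In a process model, each time step's degree of cure would set both the incremental shrinkage strain and the evolving stiffness passed to the FEA solver.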
Procedia PDF Downloads 94
16581 Production of Nanocomposite Electrical Contact Materials Ag-SnO2, W-Cu and Cu-C in Thermal Plasma
Authors: A. V. Samokhin, A. A. Fadeev, M. A. Sinaiskii, N. V. Alekseev, A. V. Kolesnikov
Abstract:
Composite materials in which a metal matrix is reinforced by ceramic or metal particles are of great interest for the manufacture of electrical contacts. Significant improvement of the composites' physical and mechanical properties, as well as an increase in the performance of composite-based products, can be achieved if a nanoscale structure is obtained in the composite materials by using nanosized powders as starting components. This paper presents the results of synthesizing nanosized composite powders (Ag-SnO2, W-Cu and Cu-C) in DC thermal plasma flows. The investigations included the following processes: recondensation of a micron powder mixture Ag + SnO2 in a nitrogen plasma; reduction of an oxide powder mixture (WO3 + CuO) in a hydrogen-nitrogen plasma; and decomposition of copper formate and copper acetate powders in a nitrogen plasma. Equilibrium compositions of the multicomponent systems Ag-Sn-O-N, W-Cu-O-H-N and Cu-O-C-H-N were calculated over the temperature range 400-5000 K to estimate the basic process characteristics. Experimental studies were performed using a plasma reactor with a confined jet flow. The plasma jet net power was in the range of 2-13 kW, and the feedstock flow rate was up to 0.35 kg/h. The obtained powders were characterized by TEM, HR-TEM, SEM, EDS, ED-XRF, XRD, BET and QEA methods. Nanocomposite Ag-SnO2 (12 wt.%): processing of the initial Ag-SnO2 powder mixture in a nitrogen thermal plasma stream produced nanopowders with a specific surface area of up to 24 m2/g, consisting predominantly of particles smaller than 100 nm. According to the XRD results, tin was present in the products as the SnO2 phase and also as intermetallic AgxSn phases. Nanocomposite W-Cu (20 wt.%): reduction of the (WO3 + CuO) mixture in a hydrogen-nitrogen plasma provides W-Cu nanopowder with particle sizes in the range of 10-150 nm.
The particles are mainly spherical, with a tungsten-core/copper-shell structure. The shell is several nanometers thick and is composed of copper and its oxides (Cu2O, CuO). The nanopowders contained 1.5 wt.% oxygen impurity; heat treatment in a hydrogen atmosphere reduces the oxygen content to less than 0.1 wt.%. Nanocomposite Cu-C: copper nanopowders were obtained as products of the decomposition of the starting copper compounds. The nanopowders were primarily spherical, with particle sizes below 100 nm. The main phase was copper, with small amounts of the Cu2O and CuO oxides. Copper formate decomposition products had a specific surface area of 2.5-7 m2/g and contained 0.15-4 wt.% carbon; copper acetate decomposition products had a specific surface area of 5-35 m2/g and a carbon content of 0.3-5 wt.%. Compaction of the nanocomposites (sintering in hydrogen for Ag-SnO2 and spark plasma sintering (SPS) for W-Cu) showed that samples with a relative density of 97-98% and a submicron structure can be obtained. These studies indicate the possibility of using high-intensity plasma processes to create new technologies for producing nanocomposite materials for electrical contacts.
Keywords: electrical contact, material, nanocomposite, plasma, synthesis
Procedia PDF Downloads 238
16580 An Enhanced Digital Forensic Model for Internet of Things Forensic
Authors: Tina Wu, Andrew Martin
Abstract:
The expansion of the Internet of Things (IoT) brings a new level of threat. Attacks on IoT devices are already being used by criminals to form botnets, launch Distributed Denial of Service (DDoS) attacks and distribute malware. This opens a whole new digital forensic arena in which to develop methodologies capable of investigating IoT-related crimes. However, existing proposed IoT forensic models are still premature, requiring further improvement and validation, and many lack detail on the acquisition and analysis phases. This paper proposes an enhanced theoretical IoT digital forensic model focused on identifying and acquiring the main sources of evidence in a methodical way. In addition, it presents a theoretical acquisition framework covering the different stages required to acquire evidence from IoT devices.
Keywords: acquisition, Internet of Things, model, zoning
Procedia PDF Downloads 273
16579 Building Information Modeling Applied for the Measurement of Water Footprint of Construction Supplies
Authors: Julio Franco
Abstract:
Water is used, directly and indirectly, in all activities of the construction production chain, making it a subject of worldwide relevance for sustainable development. The ongoing expansion of urban areas leads to a high demand for natural resources, which in turn causes significant environmental impacts. The present work proposes the application of BIM tools to assist the measurement of the water footprint (WF) of civil construction supplies. Data were inserted into the model as element properties, allowing them to be analyzed per element or across the whole model. The WF calculation was automated using parameterization in Autodesk Revit software. The parameterization was associated with the materials of each element in the model, so that any change to these elements directly alters the results of the WF calculations. As a case study, we applied the tool to a building project model to test the parameterized WF calculation. The results show that the proposed parameterization successfully automated the WF calculations in response to design changes. We envision this tool assisting the measurement and rationalization of the environmental impact of construction projects in terms of WF.
Keywords: building information modeling, BIM, sustainable development, water footprint
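The parameterized calculation can be pictured as a sum over model elements of material quantity times a per-material WF coefficient. Here is a minimal sketch of that aggregation; the material names and coefficients are hypothetical, not the paper's data, and in Revit the same logic would live in shared parameters rather than Python.

```python
# Hypothetical coefficients: liters of water embodied per unit of material.
WF_COEFF = {"concrete": 180.0, "steel": 60.0}

def element_wf(material, quantity):
    """Water footprint of one model element."""
    return WF_COEFF[material] * quantity

def model_wf(elements):
    """Total WF of the model; changing any element changes the result."""
    return sum(element_wf(mat, qty) for mat, qty in elements)

elements = [("concrete", 12.5), ("steel", 3.0)]  # (material, quantity) pairs
print(model_wf(elements))  # 12.5*180 + 3.0*60 = 2430.0
```

Because the total is recomputed from element properties, a design change (say, swapping a steel element's quantity) immediately propagates to the model-level WF, mirroring the behavior described above.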
Procedia PDF Downloads 151
16578 Operation Cycle Model of ASz62IR Radial Aircraft Engine
Authors: M. Duk, L. Grabowski, P. Magryta
Abstract:
A very important element of air transport today is its environmental impact. There are currently no emissions standards for the turbine and piston engines used in air transport; nevertheless, the environmental effect of aircraft engine exhaust gases should be kept as small as possible. For this purpose, R&D centers often use special software to simulate and estimate the negative effects of an engine's working process. In a cooperation between the Lublin University of Technology and the Polish aviation company WSK "PZL-KALISZ" S.A., aimed at more effective operation of the ASz62IR engine, one such tool was used. The AVL Boost software performs 1D simulations of the combustion process of piston engines. The ASz62IR is a nine-cylinder aircraft engine in a radial configuration. In order to analyze the impact of its working process on the environment, a mathematical model was built in AVL Boost. This model contains, among others, a model of the operating cycle of the cylinders, based on the change in combustion chamber volume produced by the reciprocating movement of a piston. The simplifying assumption that all pistons move identically was made. The changes in cylinder volume during an operating cycle were specified; these changes are needed to determine the energy balance of a cylinder in an internal combustion engine, which is fundamental to a model of the operating cycle. The calculation of the cylinder's thermodynamic state was based on the first law of thermodynamics. The change of mass in the cylinder was calculated from the sum of inflowing and outflowing masses, accounting for cylinder internal energy, heat from the fuel, heat losses, mass in the cylinder, cylinder pressure and volume, blowdown enthalpy, evaporation heat, etc. The model assumed that the amount of heat released in the combustion process was calculated from the rate of combustion, using the Vibe model.
For the gas exchange, it was also important to consider heat transfer in the inlet and outlet channels, where values are much higher than for flow in a straight pipe. This results from the high heat-exchange and temperature coefficients near the valves and valve seats; a modified Zapf model of heat exchange was used. To use the model with flight scenarios, the impact of flight altitude on engine performance was analyzed, assuming that the pressure and temperature at the inlet and outlet correspond to the values given by the International Standard Atmosphere (ISA) model. Combining this operating-cycle model with the other submodels of the ASz62IR engine makes a full analysis of engine performance under ISA conditions possible. This work was financed by the Polish National Centre for Research and Development, INNOLOT, under
Keywords: aviation propulsion, AVL Boost, engine model, operation cycle, aircraft engine
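The volume-change relation at the heart of an operating-cycle model follows standard slider-crank kinematics. A sketch under assumed dimensions (the bore, stroke, connecting-rod length and compression ratio below are illustrative approximations, not certified ASz62IR data):

```python
import math

# Cylinder volume vs. crank angle from slider-crank kinematics. All engine
# dimensions are illustrative assumptions, not certified engine data.
def cylinder_volume(theta_deg, bore=0.1555, stroke=0.1745, conrod=0.30, cr=6.4):
    r = stroke / 2.0                      # crank radius
    area = math.pi * bore ** 2 / 4.0      # piston crown area
    v_swept = area * stroke               # swept volume of one cylinder
    v_clear = v_swept / (cr - 1.0)        # clearance volume from compression ratio
    th = math.radians(theta_deg)
    # Piston travel from top dead centre (TDC), slider-crank relation.
    x = (r * (1.0 - math.cos(th))
         + conrod - math.sqrt(conrod ** 2 - (r * math.sin(th)) ** 2))
    return v_clear + area * x

v_tdc = cylinder_volume(0.0)     # minimum volume, at TDC
v_bdc = cylinder_volume(180.0)   # maximum volume, at BDC
# By construction, v_bdc / v_tdc equals the compression ratio (6.4 here).
```

Evaluating this over a full 720° cycle, identically for all nine cylinders, gives exactly the volume history the energy-balance equations of the cycle model operate on.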
Procedia PDF Downloads 296
16577 Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm
Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee
Abstract:
Model updating methods have received increasing attention for damage detection in structures based on measured modal parameters. This paper presents a probability-based damage detection (PBDD) procedure built on model updating, in which a one-stage, model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses finite element model updating with a Monte Carlo simulation that accounts for the uncertainty caused by measurement noise. Enhanced ideal gas molecular movement (EIGMM) is used as the main algorithm for model updating. Ideal gas molecular movement (IGMM) is a multi-agent algorithm inspired by the motion of ideal gas molecules, which disperse rapidly in all directions and cover the entire available space, owing to their high speed and their collisions with each other and with the surrounding barriers. In the IGMM algorithm, an initial population of gas molecules is randomly generated, and the governing equations for molecular velocity and collisions are used to seek optimal solutions. In this paper, an enhanced version of IGMM is developed, which removes unchanged variables after a specified number of iterations. The proposed method is applied to two numerical examples in structural damage detection. The results show that the proposed method performs well and is competitive in the PBDD of structures.
Keywords: enhanced ideal gas molecular movement (EIGMM), ideal gas molecular movement (IGMM), model updating method, probability-based damage detection (PBDD), uncertainty quantification
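The overall loop — perturb the measured modal data with Monte Carlo noise, then let a population search update a damage parameter until model and measurement agree — can be caricatured in a few lines. This is a loose one-parameter toy in the spirit of a population-based metaheuristic, not the authors' EIGMM; the surrogate frequency model and all constants are invented.

```python
import random

random.seed(1)

def model_freq(damage):
    """1-DOF surrogate: natural frequency drops as stiffness is lost."""
    return 10.0 * (1.0 - damage) ** 0.5

# Monte Carlo "measurements": true damage 0.30 plus Gaussian sensor noise.
true_damage = 0.30
measured = [model_freq(true_damage) + random.gauss(0.0, 0.02) for _ in range(20)]

def residual(d):
    """Misfit between measured frequencies and a candidate damage level."""
    return sum((m - model_freq(d)) ** 2 for m in measured)

# A population of "molecules" random-walking in the damage space [0, 0.99];
# the best position ever visited is retained.
pop = [random.random() * 0.99 for _ in range(10)]
best = min(pop, key=residual)
for _ in range(200):
    pop = [min(max(p + random.gauss(0.0, 0.05), 0.0), 0.99) for p in pop]
    cand = min(pop, key=residual)
    if residual(cand) < residual(best):
        best = cand
# best should now sit close to the true damage level of 0.30
```

The real procedure replaces the scalar surrogate with a finite element model, the single parameter with element-wise stiffness reductions, and the random walk with the IGMM velocity and collision equations; repeating the loop over many noise realizations yields damage probabilities rather than a point estimate.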
Procedia PDF Downloads 280
16576 A Comparative Assessment of Some Algorithms for Modeling and Forecasting Horizontal Displacement of Ialy Dam, Vietnam
Authors: Kien-Trinh Thi Bui, Cuong Manh Nguyen
Abstract:
In order to simulate and reproduce the operational characteristics of a dam visually, it is necessary to capture the displacement at different measurement points and analyze the observed movement data promptly to forecast dam safety. The accuracy of forecasts is further improved by applying machine learning methods to the data analysis. In this study, horizontal displacement monitoring data from the Ialy hydroelectric dam (Vietnam) were used with three machine learning algorithms: Gaussian processes (GP), multi-layer perceptron (MLP) neural networks, and the M5-Rules algorithm, for modelling and forecasting the dam's horizontal displacement. The database used in this research was built from time series collected between 2006 and 2021 and divided into two parts: a training dataset and a validating dataset. The final results show that all three algorithms perform well in both training and validation, with the MLP being the best model. Their usefulness was further investigated by comparison with benchmark models created by multiple linear regression. The results show that the performance obtained with the GP, MLP and M5-Rules models is much better; therefore, these three models should be used to analyze and predict the horizontal displacement of the dam.
Keywords: Gaussian processes, horizontal displacement, hydropower dam, Ialy dam, M5-Rules, multi-layer perceptron neural networks
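The evaluation pattern described above — a chronological train/validate split of the monitoring series and a score against a simple benchmark — can be sketched generically. The displacement series below is synthetic and the persistence benchmark is illustrative; neither is the Ialy data nor the paper's regression benchmark.

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error between observed and forecast values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Synthetic monthly displacement series in mm: slow drift plus a seasonal term.
series = [5.0 + 0.1 * t + 2.0 * math.sin(2.0 * math.pi * t / 12.0) for t in range(192)]

# Chronological split: roughly the first 12 years for training, the rest for validation.
train, valid = series[:144], series[144:]

# Naive persistence benchmark: the last training value carries forward.
persistence = [train[-1]] * len(valid)
score = rmse(valid, persistence)  # any learned model should beat this
```

Time-ordered splits matter here: shuffling a dam's displacement record before splitting would leak future behaviour into training and overstate every model's skill.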
Procedia PDF Downloads 217
16575 A Pedagogical Study of Computational Design in a Simulated Building Information Modeling-Cloud Environment
Authors: Jaehwan Jung, Sung-Ah Kim
Abstract:
Building Information Modeling (BIM) provides project stakeholders with varied information about the properties and geometry of every component, in the form of a 3D object-based parametric building model. BIM represents a set of information and solutions that are expected to improve the collaborative work process and the quality of the building design. To improve collaboration among project participants, the BIM model should provide the necessary information to remote participants in real time and manage that information throughout the process. The purpose of this paper is to propose a process model that supports effective collaborative architectural design work in architectural design education in a BIM-Cloud environment.
Keywords: BIM, cloud computing, collaborative design, digital design education
Procedia PDF Downloads 438
16574 LORA: A Learning Outcome Modelling Approach for Higher Education
Authors: Aqeel Zeid, Hasna Anees, Mohamed Adheeb, Mohamed Rifan, Kalpani Manathunga
Abstract:
To achieve constructive alignment in a higher education program, a clear set of learning outcomes must be defined. Traditional learning outcome definition techniques, such as Bloom's taxonomy, are not written to be used by the student. This can disadvantage students in student-centric learning settings, where they are expected to formulate their own learning strategies. To solve this problem, we propose the learning outcome relation and aggregation (LORA) model. To achieve alignment, we developed learning outcome, assessment, and resource authoring tools that help teachers tag learning outcomes during creation. A pilot study was conducted with an expert panel of experienced professionals in the education domain to evaluate whether the LORA model and tools improve on traditional methods. The panel unanimously agreed that the model and tools are beneficial and effective, and that they helped model learning outcomes in a more student-centric and descriptive way.
Keywords: learning design, constructive alignment, Bloom's taxonomy, learning outcome modelling
Procedia PDF Downloads 191