Search results for: least absolute shrinkage and selection operator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3397

2077 Modelling Biological Treatment of Dye Wastewater in SBR Systems Inoculated with Bacteria by Artificial Neural Network

Authors: Yasaman Sanayei, Alireza Bahiraie

Abstract:

This paper presents a systematic methodology based on the application of artificial neural networks (ANNs) to a sequencing batch reactor (SBR). The SBR is a fill-and-draw biological wastewater technology that is especially suited for nutrient removal. Removing reactive dye with Sphingomonas paucimobilis bacteria in a sequencing batch reactor is a novel approach to dye removal. The influent COD, MLVSS, and reaction time were selected as the process inputs and the effluent COD and BOD as the process outputs. The best possible result for the discrete pole parameter was a = 0.44. In order to adjust the parameters of the ANN, the Levenberg-Marquardt (LM) algorithm was employed. The results predicted by the model were compared to the experimental data and showed a high correlation, with R² > 0.99 and a low mean absolute error (MAE). The results of this study reveal that the developed model is accurate and effective in predicting the COD and BOD parameters of dye-containing wastewater treated by SBR. The proposed modeling approach can be applied to other industrial wastewater treatment systems to predict effluent characteristics. Note that SBRs are normally operated with constant, predefined stage durations, which results in inefficient operation. Data obtained from the on-line electronic sensors installed in the SBR and from quality-control laboratory analyses were used to develop the optimal architectures of two different ANNs. The results show that the developed models can be used as efficient and cost-effective predictive tools for the system analysed.
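The two headline metrics in the abstract, R² and MAE, can be reproduced in a few lines. Below is a minimal sketch of their standard definitions applied to observed versus ANN-predicted effluent COD; the numeric values are illustrative toys, not the study's data.

```python
def mae(observed, predicted):
    """Mean absolute error between observed and predicted values."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Hypothetical effluent COD values (mg/L): observed vs. model-predicted
obs = [120.0, 95.0, 88.0, 140.0, 110.0]
pred = [118.0, 97.0, 90.0, 138.0, 111.0]
print(mae(obs, pred), r_squared(obs, pred))
```

With these toy numbers the MAE is small and R² exceeds 0.99, the same qualitative pattern the abstract reports.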

Keywords: artificial neural network, COD removal, SBR, Sphingomonas paucimobilis

Procedia PDF Downloads 395
2076 A Design of the Organic Rankine Cycle for the Low Temperature Waste Heat

Authors: K. Fraňa, M. Müller

Abstract:

This paper presents the design of an Organic Rankine Cycle (ORC) with heat regeneration and superheating. The maximum temperature level in the ORC is considered to be 110°C, and the maximum pressure varies up to 2.5 MPa. The selection process for the appropriate working fluids and the thermal design and calculation of the cycle and its components are described. With respect to safety, toxicity, flammability, price, and thermal cycle efficiency, the working fluid selected is R134a. As a particular example, the thermal design of the condenser used for an ORC engine with a theoretical thermal power of 179 kW is introduced. The minimal heat transfer area for complete condensation was determined to be approximately 520 m².

Keywords: organic rankine cycle, thermal efficiency, working fluids, environmental engineering

Procedia PDF Downloads 444
2075 Optimal ECG Sampling Frequency for Multiscale Entropy-Based HRV

Authors: Manjit Singh

Abstract:

Multiscale entropy (MSE) is an extensively used index that provides a general understanding of the multiple complexities of the physiologic mechanisms underlying heart rate variability (HRV), which operate over a wide range of time scales. Accurate selection of the electrocardiogram (ECG) sampling frequency is an essential concern for clinically significant HRV quantification: a high ECG sampling rate increases memory requirements and processing time, whereas a low sampling rate degrades signal quality and results in clinically misinterpreted HRV. In this work, the impact of ECG sampling frequency on MSE-based HRV has been quantified. MSE measures are found to be sensitive to the ECG sampling frequency, and the effect of sampling frequency is a function of the time scale.
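As a hedged sketch of the MSE pipeline the abstract refers to: the RR-interval series is coarse-grained at each scale factor, and sample entropy is then computed on each coarse-grained series to build the MSE curve. The helper below shows only the coarse-graining step; the RR values are invented for illustration.

```python
def coarse_grain(series, scale):
    """Non-overlapping window averages: y_j = mean(x[j*scale : (j+1)*scale]).

    At scale 1 the original series is returned unchanged; computing sample
    entropy of the coarse-grained series over a range of scales yields MSE.
    """
    n = len(series) // scale
    return [sum(series[j * scale:(j + 1) * scale]) / scale for j in range(n)]

rr = [0.80, 0.82, 0.78, 0.85, 0.79, 0.81]  # toy RR intervals in seconds
print(coarse_grain(rr, 2))  # three averaged points at scale 2
```

Since coarse-graining divides the effective sampling of the series by the scale factor, the ECG sampling frequency interacts with scale exactly as the abstract argues.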

Keywords: ECG (electrocardiogram), heart rate variability (HRV), multiscale entropy, sampling frequency

Procedia PDF Downloads 256
2074 Cracks Detection and Measurement Using VLP-16 LiDAR and Intel Depth Camera D435 in Real-Time

Authors: Xinwen Zhu, Xingguang Li, Sun Yi

Abstract:

Cracks are among the most common forms of damage in buildings, bridges, roads, and other structures of various materials, and they may pose safety hazards. Traditional methods of manual detection and measurement are subjective, time-consuming, and labor-intensive, and are increasingly unable to meet the needs of modern development. In addition, crack detection and measurement must be performed safely, considering space limitations and hazards, so intelligent crack detection has become a necessary area of research. In this paper, an efficient method for crack detection and quantification using a 3D sensor, a LiDAR, and a depth camera is proposed. This method works even in dark environments, which are common in real-world applications. The LiDAR rapidly spins to scan the surrounding environment, firing lasers thousands of times per second and providing a rich 3D point cloud in real time. The LiDAR provides quite accurate depth information: the distance of each point can be determined to within about ±3 cm, and top-range models can see distances of over 100 m. However, this accuracy is still too coarse for some high-precision structures and materials. To make the measured crack depth much more accurate, a depth camera is needed; the cracks are scanned by the depth camera at the same time. Finally, all data from the LiDAR and the depth camera are analyzed, and the size of the cracks can be quantified successfully. The comparison shows that the minimum and mean absolute percentage errors between measured and calculated widths are about 2.22% and 6.27%, respectively. The experiments and results are presented in this paper.
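The reported 2.22% minimum and 6.27% mean errors follow the standard absolute-percentage-error definition; a minimal sketch with made-up crack widths, not the paper's measurements:

```python
def abs_pct_errors(measured, calculated):
    """Absolute percentage error |m - c| / m * 100 for each crack width."""
    return [abs(m - c) / m * 100 for m, c in zip(measured, calculated)]

measured = [5.0, 4.0, 2.5]    # hypothetical manually measured widths, mm
calculated = [4.9, 3.8, 2.6]  # hypothetical sensor-derived widths, mm
errors = abs_pct_errors(measured, calculated)
print(min(errors), sum(errors) / len(errors))  # minimum and mean APE
```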

Keywords: LiDAR, depth camera, real-time, detection and measurement

Procedia PDF Downloads 201
2073 A Spatial Approach to Model Mortality Rates

Authors: Yin-Yee Leong, Jack C. Yue, Hsin-Chung Wang

Abstract:

Human longevity has experienced its largest increase since the end of World War II, and modeling mortality rates is therefore the focus of many studies. Among all mortality models, the Lee–Carter model is the most popular approach, since it is fairly easy to use and has good accuracy in predicting mortality rates (e.g., for Japan and the USA). However, empirical studies from several countries have shown that the age parameters of the Lee–Carter model are not constant in time. Many modifications of the Lee–Carter model have been proposed to deal with this problem, including adding an extra cohort effect or another period effect. In this study, we propose a spatial modification and use clusters to explain why the age parameters of the Lee–Carter model are not constant. In spatial analysis, clusters are areas whose mortality rates are unusually high or low compared with their neighbors, where the “location” of a mortality rate is measured by age and time, that is, a 2-dimensional coordinate. We use a popular cluster detection method, spatial scan statistics, a local statistical test based on the likelihood ratio test, to identify locations with mortality rates that cannot be described well by the Lee–Carter model. We first use computer simulation to demonstrate that the cluster effect is a possible source of the problem of the age parameters not being constant. Next, we show that adding the cluster effect can solve this non-constancy problem. We also apply the proposed approach to mortality data from Japan, France, the USA, and Taiwan. The empirical results show that our approach gives better-fitting results and smaller mean absolute percentage errors than the Lee–Carter model.
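For context, the Lee–Carter model expresses the log mortality rate as ln m(x,t) = a_x + b_x·k_t + ε. A minimal sketch of the first estimation step, under the usual identifiability constraints Σb_x = 1 and Σk_t = 0, where a_x is the time-average of log rates and k_t can be approximated by summing the centered log rates over ages; the function and data below are illustrative, not the study's method or data.

```python
import math

def lee_carter_ax_kt(m):
    """m[x][t]: mortality rate at age index x and time index t.

    Returns a_x (time-average of ln m per age) and an approximation of
    k_t (age-sum of centered log rates, valid under sum(b_x) = 1).
    """
    num_ages, num_years = len(m), len(m[0])
    log_m = [[math.log(rate) for rate in row] for row in m]
    a = [sum(row) / num_years for row in log_m]
    k = [sum(log_m[x][t] - a[x] for x in range(num_ages))
         for t in range(num_years)]
    return a, k

# Toy 2-age, 2-year mortality surface; rates double in year 2
a, k = lee_carter_ax_kt([[0.01, 0.02], [0.10, 0.20]])
print(a, k)
```

Note that k sums to zero by construction, matching the constraint above.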

Keywords: mortality improvement, Lee–Carter model, spatial statistics, cluster detection

Procedia PDF Downloads 158
2072 Energy Mutual Funds: The Behavior of Environmental, Social and Governance Funds

Authors: Anna Paola Micheli, Anna Maria Calce, Loris Di Nallo

Abstract:

Sustainable finance identifies the process that leads, in the adoption of investment decisions, to taking environmental and social factors into account, with the aim of orienting investments towards sustainable, long-term activities. Given that the topic is at the center of national agendas, long-term investments will no longer be analyzed only by looking at financial data; environmental, social, and governance (ESG) factors will be increasingly important and will play a fundamental role in determining the risk and return of an investment. Although this perspective does not deny the orientation to profit, ESG mutual funds represent sustainable finance applied to the world of mutual funds. The goal of this paper is to verify this attitude, in particular in the energy sector. The choice of sector is not casual: ESG is the acronym for environmental, social, and governance, and energy companies are strictly related to the environmental theme. The methodology adopted compares a sample of ESG funds with a sample of non-ESG funds with similar characteristics, using the most important indicators in the literature: yield, standard deviation, and the Sharpe index. The analysis is focused on equity funds. The results, which are partial due to the limited historical record, show a good performance of ESG funds, testifying that a sustainable approach does not necessarily mean lower profits. It is clear that these first findings do not imply an absolute preference for ESG funds in terms of performance, because persistence of the results must still be verified. Furthermore, these findings remain to be verified in other sectors and in bond funds.
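Of the three indicators named, the Sharpe index is the only composite one; here is a minimal sketch under its usual definition (mean excess return over the standard deviation of excess returns), with invented fund returns rather than the study's data:

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return divided by the standard deviation of excess returns."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

esg_fund = [0.02, 0.04, 0.06]  # hypothetical periodic returns
print(sharpe_ratio(esg_fund))  # mean 0.04 over stdev 0.02
```

Comparing this statistic between matched ESG and non-ESG samples is the risk-adjusted half of the comparison the abstract describes; yield and standard deviation are read off directly.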

Keywords: mutual funds, ESG, performance, energy

Procedia PDF Downloads 93
2071 Varietal Behavior of Some Chickpea Genotypes to Wilt Disease Induced by Fusarium oxysporum f.sp. ciceris

Authors: Rouag N., Khalifa M. W., Bencheikh A., Abed H.

Abstract:

A study of the behavior of forty-two chickpea varieties and genotypes with respect to root wilt disease induced by Fusarium oxysporum under natural infection conditions was conducted at the ITGC experimental station in Sétif. The infected plants of the different chickpea genotypes showed multiple symptoms in the field caused by the local strain of Fusarium oxysporum f.sp. ciceris, belonging to race II of the pathogen. These symptoms ranged from lateral or partial wilting of some ramifications to total desiccation of the plant, sometimes combined with very slow growth of symptomatic plants. The search for sources of resistance to Fusarium wilt among the 42 genotypes tested revealed, in terms of infection rate, the presence of seven groups, and no genotype showed absolute resistance. In terms of severity, the results revealed three homogeneous groups. The first group was formed by the most resistant genotypes, in this case Flip10-368C, Flip11-77C, Flip11-186C, Flip11-124C, Flip11-142C, Flip11-152C, Flip11-69C, Ghab 05, Flip11-159C, Flip11-90C, Flip10-357C, and Flip11-37C, while the second group consisted of the genotype Flip10-382C, which was found to be the most sensitive in the natural infection test. Thus, the genotypes of Cicer arietinum L. that have shown significant levels of resistance to Fusarium wilt can be integrated into breeding and improvement programs.

Keywords: chickpea, Cicer arietinum, Fusarium oxysporum, genotype resistance

Procedia PDF Downloads 68
2070 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models

Authors: Suriya

Abstract:

Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution. Therefore, monitoring and predicting PM₂.₅ concentration in UB is of great significance for the health of the local people and for environmental management. To date, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we propose two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement gained by using the AOD input was tested, and the performance of the models was evaluated using mean absolute error (MAE) and root mean square error (RMSE). The prediction accuracies of both the Bayes-LSTM and CNN-LSTM deep learning models improved when AOD was included as an input parameter. The improvement was particularly pronounced for the CNN-LSTM model in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model improved slightly, while that of the CNN-LSTM model decreased slightly. In summary, we propose two novel deep learning models for PM₂.₅ concentration prediction in UB, pioneer the use of AOD data from H8 for this task, and demonstrate that including AOD input data improves the performance of both proposed models.

Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar

Procedia PDF Downloads 32
2069 Cosmetic Recommendation Approach Using Machine Learning

Authors: Shakila N. Senarath, Dinesh Asanka, Janaka Wijayanayake

Abstract:

The need for cosmetic products is rising to fulfill consumer needs for personal appearance and hygiene. A cosmetic product consists of various chemical ingredients, which may help keep the skin healthy or may cause damage, and not every ingredient performs the same way on every person. The most appropriate way to select a healthy cosmetic product is to identify one's skin type first and then select the most suitable product with safe ingredients; the selection process for cosmetic products is therefore complicated, and consumer surveys have shown that, most of the time, consumers carry it out improperly. This study suggests a content-based system that recommends cosmetic products based on human factors; skin type, gender, and price range are considered as the human factors. The proposed system will be implemented using machine learning, with the consumer's skin type, gender, and price range taken as inputs. The consumer's skin type is derived using the Baumann Skin Type Questionnaire, a value-based approach comprising a number of questions that map the user's skin type to one of the 16 types of the Baumann Skin Type Indicator (BSTI). Two datasets were collected for the research: a user dataset and a cosmetic dataset. The user dataset was collected through a questionnaire given to the public. Product details are included in the cosmetic dataset, which covers five different product categories (moisturizer, cleanser, sun protector, face mask, eye cream). TF-IDF (Term Frequency–Inverse Document Frequency) is applied to vectorize the cosmetic ingredients in the generic cosmetic products dataset and the user-preferred products dataset. Using the TF-IDF vectors, the user-preferred products dataset and the generic cosmetic products dataset can each be represented as sparse vectors. The similarity between each user-preferred product and each generic cosmetic product is calculated using the cosine similarity method, and a similarity matrix is used for the recommendation process: the higher the similarity, the better the match for the consumer. By sorting a user's column of the similarity matrix in descending order, the top recommended products can be retrieved. Even though the results return a list of similar products, since user information such as gender and price range has been gathered, further optimization can be done by weighting those parameters once a set of recommended products for a user has been retrieved.
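A minimal sketch of the similarity step described above, using raw token counts in place of TF-IDF weights (the ingredient names are invented for illustration):

```python
import math
from collections import Counter

def _norm(w):
    return math.sqrt(sum(x * x for x in w.values()))

def cosine_similarity(u, v):
    """Cosine of the angle between two sparse term-weight vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    return dot / (_norm(u) * _norm(v))

# Hypothetical ingredient lists for a user-preferred product and a candidate
preferred = Counter("aqua glycerin niacinamide".split())
candidate = Counter("aqua glycerin panthenol".split())
print(cosine_similarity(preferred, candidate))  # two of three terms shared
```

Computing this score for every (user-preferred, generic) product pair fills the similarity matrix; sorting a user's column in descending order then yields the top recommendations.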

Keywords: content-based filtering, cosmetics, machine learning, recommendation system

Procedia PDF Downloads 120
2068 Activation of Spermidine/Spermine N1-Acetyltransferase 1 (SSAT-1) as Biomarker in Breast Cancer

Authors: Rubina Ghani, Sehrish Zia, Afifa Fatima Rafique, Shaista Emad

Abstract:

Background: Cancer is a leading cause of death worldwide, with breast cancer being the most common cancer in women. Pakistan has the highest rate of breast cancer cases among Asian countries. Early and accurate diagnosis is crucial for treatment outcomes and quality of life. Method: This is a case-control study with a sample size of 150: 100 suspected cancer cases, 25 healthy controls, and 25 diagnosed breast cancer cases. To analyze SSAT-1 mRNA expression in whole blood, the Zymo Research Quick-RNA Miniprep and InnuSCRIPT One-Step RT-PCR SYBR Green kits were used. Result: Total mRNA was isolated, and the expression of SSAT-1 was measured using RT-qPCR. Threshold cycle (Ct) values were used to determine the amount of each mRNA. ΔCt values were calculated as the difference between Ct(SSAT-1) and Ct(GAPDH), and ΔΔCt values were then calculated, with the median absolute deviation computed for all samples within the same experimental group. Samples that did not correlate with the results were treated as outliers and excluded from the analysis. The relative fold change is reported as 2^(-ΔΔCt). Suspected cases showed a maximum fold change of 32.24, against a control fold change of 1.31. Conclusion: The study reveals an overexpression of SSAT-1 in breast cancer. Furthermore, SSAT-1 can be used as a diagnostic, prognostic, and therapeutic marker for the early diagnosis of cancer.
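The relative quantification described above is the standard 2^(−ΔΔCt) method; a minimal sketch with invented Ct values (not the study's measurements):

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method.

    dCt  = Ct(target gene) - Ct(reference gene), per sample
    ddCt = dCt(sample) - dCt(control group)
    """
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_sample - d_ct_control)

# Hypothetical: SSAT-1 crosses threshold 3 cycles earlier (relative to
# GAPDH) in a case sample than in controls -> 2^3 = 8-fold overexpression
print(fold_change(22.0, 20.0, 25.0, 20.0))
```

Lower Ct means earlier amplification and thus higher starting expression, which is why the exponent is negated.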

Keywords: breast cancer, spermidine/spermine, qPCR, mRNA

Procedia PDF Downloads 11
2067 Enhanced Magnetoelastic Response near Morphotropic Phase Boundary in Ferromagnetic Materials: Experimental and Theoretical Analysis

Authors: Murtaza Adil, Sen Yang, Zhou Chao, Song Xiaoping

Abstract:

The morphotropic phase boundary (MPB) has recently attracted considerable interest in ferromagnetic systems as a route to enhanced magnetoelastic response. In the present study, the structural and magnetoelastic properties of the MPB-involved ferromagnetic Tb1-xGdxFe2 (0 ≤ x ≤ 1) system have been investigated. The change of the easy magnetization direction from <111> to <100> with increasing x, up to the MPB composition of x = 0.9, is detected by step-scanned [440] synchrotron X-ray diffraction reflections. The Gd substitution for Tb changes the composition for anisotropy compensation near the MPB composition of x = 0.9, which was confirmed by the analysis of detailed scanned XRD, magnetization curves, and the calculation of the first anisotropy constant K1. A spin configuration diagram accompanied by the different crystal structures of Tb1-xGdxFe2 was constructed. The calculated first anisotropy constant K1 shows a minimum at the MPB composition of x = 0.9. In addition, a large ratio between the magnetostriction and the absolute value of the first anisotropy constant, |λS/K1|, appears at the MPB composition, which makes this a potential material for magnetostrictive applications. Based on the experimental results, a theoretical approach is also proposed to show that the facilitated magnetization rotation and enhanced magnetoelastic effect near the MPB composition are a consequence of the anisotropic flattening of the free energy of the ferromagnetic crystal. Our work points to the universal existence of MPBs in ferromagnetic materials, which is important for substantial improvement of magnetic and magnetostrictive properties and may provide a new route to developing advanced functional materials.

Keywords: free energy, magnetic anisotropy, magnetostriction, morphotropic phase boundary (MPB)

Procedia PDF Downloads 265
2066 Establishment of a Nomogram Prediction Model for Postpartum Hemorrhage during Vaginal Delivery

Authors: Yinglisong, Jingge Chen, Jingxuan Chen, Yan Wang, Hui Huang, Jing Zhang, Qianqian Zhang, Zhenzhen Zhang, Ji Zhang

Abstract:

Purpose: The study aims to establish a nomogram prediction model for postpartum hemorrhage (PPH) in vaginal delivery. Patients and Methods: Clinical data were retrospectively collected from vaginal delivery patients admitted to a hospital in Zhengzhou, China, from June 1, 2022, to October 31, 2022. Univariate and multivariate logistic regression were used to filter out independent risk factors. A nomogram model for PPH in vaginal delivery was established based on the risk factor coefficients. Bootstrapping was used for internal validation. To assess discrimination and calibration, receiver operating characteristic (ROC) and calibration curves were generated in the derivation and validation groups. Results: A total of 1340 cases of vaginal delivery were enrolled, with 81 (6.04%) having PPH. Logistic regression indicated that history of uterine surgery, induction of labor, duration of the first stage of labor, neonatal weight, WBC value (during the first stage of labor), and cervical lacerations were all independent risk factors for hemorrhage (P < 0.05). The areas under the ROC curves (AUC) of the derivation and validation groups were 0.817 and 0.821, respectively, indicating good discrimination. Two calibration curves showed that the nomogram predictions and the observed results were highly consistent (P = 0.105, P = 0.113). Conclusion: The developed individualized risk prediction nomogram can assist midwives in recognizing and diagnosing high-risk groups for PPH and in initiating early warnings to reduce PPH incidence.
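The discrimination statistic reported here (the AUC of the ROC curve) equals the probability that a randomly chosen PPH case receives a higher predicted risk than a randomly chosen non-case; a minimal sketch of that rank-based formulation, with invented risk scores:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the rank (Mann-Whitney) formulation: the fraction of
    case/non-case pairs where the case scores higher (ties count half)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

cases = [0.90, 0.80]      # hypothetical predicted PPH risks for cases
non_cases = [0.30, 0.80]  # hypothetical risks for non-cases
print(roc_auc(cases, non_cases))
```

An AUC near 0.82, as reported for both the derivation and validation groups, means a case outranks a non-case about 82% of the time.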

Keywords: vaginal delivery, postpartum hemorrhage, risk factor, nomogram

Procedia PDF Downloads 57
2065 Adaptations to Hamilton's Rule in Human Populations

Authors: Monty Vacura

Abstract:

Hamilton’s Rule is a universal law of biology expressed in protists, plants, and animals. When applied to human populations, this model explains: 1) the origin of religion in society as a biopsychological need selected to increase population size; 2) instincts of racism expressed through intergroup competition; 3) the simultaneous selection for human cooperation and conflict, love and hate; 4) the connection between sporting events and instinctive social messaging for stimulating offensive and defensive responses; 5) a pathway to reduce human sacrifice. This chapter discusses the deep psychological influences of Hamilton’s Rule. Suggestions are provided to reduce human deaths from our instinctive sacrificial behavior by consciously monitoring the Hamilton’s Rule variables highlighted throughout our media outlets.

Keywords: psychology, Hamilton’s rule, evolution, human instincts

Procedia PDF Downloads 41
2064 A Study of Cloud Computing Solution for Transportation Big Data Processing

Authors: Ilgin Gökaşar, Saman Ghaffarian

Abstract:

The need for rapidly processed big data on transportation ridership (e.g., smartcard data) and traffic operations (e.g., traffic detector data), which requires a lot of computational power, is incontrovertible in Intelligent Transportation Systems. Nowadays, cloud computing is one of the important subjects and a popular information technology solution for data processing. It enables users to process enormous amounts of data without having their own computing infrastructure; thus, it can also be a good choice for transportation big data processing. This paper examines how cloud computing can enhance transportation big data processing, contrasting its advantages and disadvantages and discussing its features.

Keywords: big data, cloud computing, Intelligent Transportation Systems, ITS, traffic data processing

Procedia PDF Downloads 445
2063 Application of Machine Learning Techniques in Forest Cover-Type Prediction

Authors: Saba Ebrahimi, Hedieh Ashrafi

Abstract:

Predicting the cover type of forests is a challenge for natural resource managers. In this project, we perform a comprehensive comparative study of two well-known classification methods, the support vector machine (SVM) and the decision tree (DT). The comparison is first performed among different variants of each classifier, and then the best variant of each classifier is compared against the other using different evaluation metrics. The effect of boosting and bagging for decision trees is also explored. Furthermore, the effects of principal component analysis (PCA) and feature selection are investigated. Throughout the project, the forest cover-type dataset from the Remote Sensing and GIS Program is used in all computations.

Keywords: classification methods, support vector machine, decision tree, forest cover-type dataset

Procedia PDF Downloads 196
2062 A Flexible Pareto Distribution Using α-Power Transformation

Authors: Shumaila Ehtisham

Abstract:

In statistical distribution theory, adding an extra parameter to a classical distribution is a usual practice. In this study, a new distribution referred to as the α-Power Pareto distribution is introduced by including an extra parameter. Several properties of the proposed distribution, including explicit expressions for the moment generating function, mode, quantiles, entropies, and order statistics, are obtained. The unknown parameters have been estimated using the maximum likelihood estimation technique. Two real datasets have been considered to examine the usefulness of the proposed distribution. It has been observed that the α-Power Pareto distribution outperforms several variants of the Pareto distribution on the basis of model selection criteria.
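For reference, the α-power transformation maps a baseline CDF F(x) to F_APT(x) = (α^F(x) − 1)/(α − 1) for α > 0, α ≠ 1, and reduces to F(x) as α → 1. A sketch applying it to the classical Pareto CDF; the parameter names (xm, theta) are illustrative, not necessarily the paper's notation:

```python
def pareto_cdf(x, xm, theta):
    """Classical Pareto CDF, F(x) = 1 - (xm / x)^theta for x >= xm."""
    return 1.0 - (xm / x) ** theta

def alpha_power_pareto_cdf(x, xm, theta, alpha):
    """alpha-power transform of the Pareto CDF: (alpha^F - 1) / (alpha - 1)."""
    f = pareto_cdf(x, xm, theta)
    if alpha == 1.0:  # the transform degenerates to the baseline CDF
        return f
    return (alpha ** f - 1.0) / (alpha - 1.0)

print(alpha_power_pareto_cdf(2.0, 1.0, 1.0, 1.0))  # baseline: F(2) = 0.5
print(alpha_power_pareto_cdf(1.0, 1.0, 2.0, 3.0))  # 0 at the lower bound
```

Since the transform is a strictly increasing map of [0, 1] onto itself, the result remains a valid CDF, with the extra parameter α controlling its shape.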

Keywords: α-power transformation, maximum likelihood estimation, moment generating function, Pareto distribution

Procedia PDF Downloads 205
2061 A Method for Quantitative Assessment of the Dependencies between Input Signals and Output Indicators in Production Systems

Authors: Maciej Zaręba, Sławomir Lasota

Abstract:

Knowing the degree of dependency between sets of input signals and selected sets of indicators that measure a production system's effectiveness is of great importance in industry. This paper introduces the SELM method, which enables the selection of the sets of input signals that most affect a selected subset of indicators measuring the effectiveness of a production system. For a defined set of output indicators, the method quantifies the impact of the input signals gathered in a continuously monitored production system.

Keywords: manufacturing operation management, signal relationship, continuous monitoring, production systems

Procedia PDF Downloads 98
2060 Investigations on Geopolymer Concrete Slabs

Authors: Akhila Jose

Abstract:

The cement industry is one of the major contributors to global warming due to the release of greenhouse gases. The primary binder in conventional concrete is Ordinary Portland Cement (OPC), and billions of tons are produced annually all over the world. An alternative binding material to OPC is needed to reduce the environmental impact of the cement manufacturing process. Geopolymer concrete is an ideal material to substitute for cement-based binders. A geopolymer is an inorganic alumino-silicate polymer. Geopolymer concrete (GPC) is formed by the polymerization of aluminates and silicates produced by the reaction of solid aluminosilicates with alkali hydroxides or alkali silicates. Various industrial by-products, such as fly ash (FA), rice husk ash (RHA), ground granulated blast furnace slag (GGBFS), silica fume (SF), and red mud (RM), are rich in aluminates and silicates. Using by-products from other industries reduces carbon dioxide emissions, offering a sustainable way of reducing greenhouse gas emissions and of disposing of the huge wastes generated by major industries such as thermal plants and steel plants. Earlier research on geopolymers focused on heat-cured, fly-ash-based precast members, which limited applications: the heat-curing process is cumbersome and costly, even though such members possess high compressive strength, low drying shrinkage and creep, and good resistance to sulphate and acid environments. Developing GPC with strength and durability characteristics comparable to OPC under ambient curing conditions is the solution that would make it a sustainable alternative in the future. In this paper, an attempt is made to review and compare the feasibility of ambient-cured GPC versus heat-cured GPC with respect to strength and serviceability characteristics. The variation in the behavior of structural members is also reviewed to identify research gaps for the future development of ambient-cured geopolymer concrete. The comparison and analysis of studies showed that GPC, most importantly the ambient-cured type, behaves comparably to OPC-based concrete in terms of strength and durability criteria.

Keywords: geopolymer concrete, oven heated, durability properties, mechanical properties

Procedia PDF Downloads 167
2059 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation within the context of fitting a PDRM for banks' audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE), and root mean square error (RMSE) of the parameter estimates. Eighteen different models under different specified conditions were fitted, and the best-fitted model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of heteroscedasticity function, establishing the fact that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
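The four assessment criteria named above can be computed from replicated Monte Carlo estimates in a few lines; a minimal sketch with a tiny invented set of estimates, not the study's 1000-iteration output:

```python
import statistics

def assess_estimator(estimates, true_value):
    """Variance, absolute bias (ABIAS), MSE, and RMSE over Monte Carlo
    replications of a parameter estimate."""
    errors = [e - true_value for e in estimates]
    mse = statistics.mean(b * b for b in errors)
    return {
        "Variance": statistics.variance(estimates),
        "ABIAS": statistics.mean(abs(b) for b in errors),
        "MSE": mse,
        "RMSE": mse ** 0.5,
    }

# Two toy replications of an estimate whose true value is 2.0
print(assess_estimator([1.0, 3.0], true_value=2.0))
```

In a full study, these four numbers would be tabulated per estimator and per variation to rank the estimators, as described above.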

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity

Procedia PDF Downloads 128
2058 Cooling of Exhaust Gases Emitted Into the Atmosphere as the Possibility to Reduce the Helicopter Radiation Emission Level

Authors: Mateusz Paszko, Mirosław Wendeker, Adam Majczak

Abstract:

Every material body whose temperature is higher than 0 K (absolute zero) emits infrared radiation to its surroundings. Infrared radiation is highly meaningful in military aviation, especially in military applications of helicopters. Helicopters, in comparison to other aircraft, have much lower flight speeds and maneuverability, which makes them easy targets for modern combat assets like infrared-guided missiles. When designing new helicopter types, especially for combat applications, it is essential to pay close attention to the infrared emissions of the solid parts composing the helicopter's structure, as well as to the exhaust gases exiting the engine's exhaust system. Due to their high temperature, exhaust gases released to the surroundings are a major factor in infrared radiation emission and, in consequence, in the detectability of a helicopter performing air combat operations. Protection of a helicopter in flight from early detection, tracking, and finally destruction can be realized in many ways. This paper presents an analysis of the possibilities for decreasing the infrared radiation level emitted to the environment by a helicopter in flight by cooling the exhaust in special ejector-based coolers. The paper also presents a concept 3D model and the results of a numerical analysis of an ejector-based cooler cooperating with the PA-10W turbine engine. The numerical analysis showed promising results for decreasing the infrared emission level of the W-3 helicopter in flight.

Keywords: exhaust cooler, helicopter propulsion, infrared radiation, stealth

Procedia PDF Downloads 333
2057 Problems Confronting the Teaching of Sex Education in Some Selected Secondary Schools in the Akoko Region of Ondo State, Nigeria

Authors: Jimoh Abiodun Alaba

Abstract:

Context: In many traditional African societies, sex education is often considered a taboo topic. However, the importance of sex education is becoming increasingly evident. This study aims to investigate the challenges faced in teaching sex education in selected secondary schools in the Akoko region of Ondo State, Nigeria. Research Aim: The aim of this study is to identify and examine the problems confronting the teaching of sex education in selected secondary schools in the Akoko region of Ondo State, Nigeria. Methodology: The study utilized a multi-stage sampling method. The first stage involved a purposive selection of ten (10) secondary schools in the Akoko region of Ondo State, while the second stage was a random selection of twenty (20) students from each of the selected secondary schools, giving a total of two hundred (200) students for the survey. Descriptive analysis using percentages was employed to analyze the collected data. Factor analysis was also used to identify the most significant problems. Findings: The study revealed that sex education has been neglected in the sampled secondary schools due to traditional African beliefs that do not support the teaching and learning of this subject. Furthermore, there was evidence to suggest that parents also displayed reluctance towards the teaching of sex education, fearing that it might expose students to inappropriate behavior. Consequently, students were deprived of this essential aspect of education, which is necessary for self-awareness and development. Theoretical Importance: This study contributes to the understanding of the challenges faced in teaching sex education in traditional African societies, specifically in the selected secondary schools in the Akoko region of Ondo State, Nigeria. Data Collection: Data were collected through the administration of 200 questionnaires in the ten selected secondary schools.
Additionally, information was gathered from federal, state, and local government authorities. Analysis Procedures: The collected data were analyzed using descriptive analysis, employing percentage calculations for better interpretation. Furthermore, factor analysis was conducted to isolate the most significant problems identified. Conclusion: The study concludes that sex education in the sampled secondary schools in the Akoko region of Ondo State, Nigeria, has suffered neglect due to traditional African beliefs and parental concerns. Consequently, students are denied an important aspect of education necessary for their self-awareness and development. Recommendations are made to change the negative perception of sex education, enrich the curriculum, and employ qualified personnel for its teaching. Additionally, it is suggested that sex education be integrated with moral instruction.
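
The two-stage sampling design described in the methodology can be sketched as follows. School and student identifiers are invented for illustration; only the structure (10 purposively chosen schools, 20 randomly drawn students each, 200 respondents in total) follows the abstract.

```python
import random

# Sketch of the two-stage sampling design: stage 1 purposively fixes the
# schools; stage 2 randomly draws 20 students per school from a (hypothetical)
# roster of 100, for 10 x 20 = 200 respondents.
random.seed(0)
schools = [f"School_{i}" for i in range(1, 11)]          # stage 1: 10 schools
sample = {
    s: random.sample([f"{s}_student_{j}" for j in range(1, 101)], 20)
    for s in schools                                      # stage 2: 20 each
}
total = sum(len(students) for students in sample.values())
print(total)  # 200 students surveyed
```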

Keywords: African traditional belief, sex, sex education, sexual misdemeanor, morality

Procedia PDF Downloads 71
2056 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
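
The core mechanism of PLS, building a few latent components from many correlated predictors, can be sketched with a minimal NIPALS implementation of PLS1. The data below are synthetic (a few underlying factors generating p >> n correlated columns), not the NIR or CNA data the abstract describes.

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal PLS1 (NIPALS) sketch: extract latent components t = X w with
    maximal covariance with y, deflate, then form regression coefficients."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, q = [], [], []
    Xk, yk = X.copy(), y.copy()
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)            # predictor weights
        t = Xk @ w                        # latent component (scores)
        p = Xk.T @ t / (t @ t)            # X loadings
        c = (yk @ t) / (t @ t)            # y loading
        Xk = Xk - np.outer(t, p)          # deflation
        yk = yk - c * t
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)  # coefficients for centered data

rng = np.random.default_rng(1)
n, p = 50, 500                            # p >> n: OLS is not even defined
latent = rng.normal(size=(n, 5))          # few underlying factors
X = latent @ rng.normal(size=(5, p)) + 0.1 * rng.normal(size=(n, p))
y = latent[:, 0] + 0.1 * rng.normal(size=n)

beta = pls1(X, y, n_components=5)
resid = (y - y.mean()) - (X - X.mean(axis=0)) @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(r2)
```

With five components matching the five generating factors, the fit recovers most of the variance in y even though the 500 predictors are strongly inter-correlated.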

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data

Procedia PDF Downloads 34
2055 Study of Skid-Mounted Natural Gas Treatment Process

Authors: Di Han, Lingfeng Li

Abstract:

This study selects a low-temperature separation dehydration and dehydrochlorination process suitable for skid-mounted design. Hysys software was used to simulate the low-temperature separation process under different refrigeration modes, focusing on comparing the refrigeration effect of each mode, the condensation amounts of hydrocarbon liquids and alcoholic wastewater, and the adaptability of the process, in order to determine the low-temperature separation process applicable to the natural gas dehydration and dehydrochlorination skid and incorporate it into the skid design. Finally, CNG recycling process calculations were carried out for the treated, qualified natural gas to determine the dehydration scheme and the key parameters of the compression process.

Keywords: skidding, dehydration and dehydrochlorination, cryogenic separation process, CNG recovery process calculations

Procedia PDF Downloads 128
2054 Utilization of Guar Gum as Functional Fat Replacer in Goshtaba, a Traditional Indian Meat Product

Authors: Sajad A. Rather, F. A. Masoodi, Rehana Akhter, S. M. Wani, Adil Gani

Abstract:

The modern trend towards convenience foods has resulted in increased production and consumption of restructured meat products, which are of great importance to the meat industry. In meat products, fat plays an important role in cooking properties, texture and sensory scores; however, high fat contents, in particular animal fats, provide high amounts of saturated fatty acids and cholesterol and are associated with several types of non-communicable diseases such as obesity, hypertension and coronary heart disease. Thus, fat reduction has generally been seen as an important strategy for producing healthier meat products. This study examined the effects of reducing the fat level from 20% to 10% and substituting mutton back fat with guar gum (0.5%, 1% and 1.5%) on the cooking properties, proximate composition, lipid and protein oxidation, texture, microstructure and sensory characteristics of goshtaba, a traditional meat product of J & K, India, compared with the high-fat counterpart. Reduced-fat goshtaba samples containing guar gum had significantly (p ≤ 0.05) higher yield, less shrinkage, more moisture retention and higher protein content than the control sample. TBARS and protein oxidation (carbonyl content) values of the control were significantly (p ≤ 0.05) higher than those of the reduced-fat goshtaba samples and showed a positive correlation between lipid and protein oxidation. Hardness, gumminess and chewiness of the control (20%) were significantly higher than those of the reduced-fat goshtaba samples. Microstructural differences between the control and treated samples were significant (p ≤ 0.05) due to the increased moisture content of the reduced-fat samples. Sensory evaluation showed a significant (p ≤ 0.05) reduction in the texture, flavour and overall acceptability scores of the treated products; however, the scores for the 0.5% and 1% treated samples were within the range of acceptability.
Guar gum may also be used as a source of soluble dietary fibre in food products and a number of clinical studies have shown a reduction in postprandial glycemia and insulinemia on consumption of guar gum, with the mechanism being attributed to an increased transit time in the stomach and small intestine, which may have been due to the viscosity of the meal hindering the access of glucose to the epithelium.

Keywords: goshtaba, guar gum, traditional, fat reduction, acceptability

Procedia PDF Downloads 263
2053 Gender and Science: Is the Association Universal?

Authors: Neelam Kumar

Abstract:

Science is stratified, with an unequal distribution of research facilities and rewards among scientists. Gender stratification is one of the most prevalent phenomena in the world of science. In most countries, gender segregation, horizontal as well as vertical, stands out in the field of science and engineering. India is no exception. This paper aims to examine: (1) gender and science associations, historical as well as contemporary; (2) women's enrolment and gender differences in the selection of academic fields; (3) women as professional researchers; and (4) career paths and recognition/trajectories. The paper reveals that in recent years the gender-science relationship has changed but is not totally free from biases. Women's enrolment in various science disciplines has shown a remarkable and steady increase in most parts of the world, including India, yet women remain underrepresented in the S&T workforce, although to a lesser degree than in the past.

Keywords: gender, science, universal, women

Procedia PDF Downloads 292
2052 GIS Pavement Maintenance Selection Strategy

Authors: Mekdelawit Teferi Alamirew

Abstract:

As a practical tool, a geographical information system (GIS) was used for data integration, collection, management, analysis, and output presentation in pavement management systems. There are many GIS techniques for improving maintenance activities, such as dynamic segmentation and weighted overlay analysis, which supports a multi-criteria decision-making process. The results indicated that the developed MPI model works sufficiently well and yields adequate output for accurate decisions, taking multiple criteria into account when prioritizing pavement sections for maintenance. Because GIS maps can express the position, extent, and severity of pavement distress features more effectively than manual approaches, the paper also offers digitized distress maps that can help agencies in their decision-making processes.
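
The multi-criteria idea behind a weighted overlay can be sketched outside a GIS: each section gets a weighted sum of criterion scores, and the sums rank sections for maintenance. The criteria, weights, and section scores below are hypothetical placeholders, not those of the study's MPI model.

```python
# Hypothetical sketch of the weighted-overlay / multi-criteria ranking idea:
# weights and criterion scores are invented for illustration only.
criteria_weights = {"distress_severity": 0.4, "traffic_volume": 0.3,
                    "ride_quality": 0.2, "drainage": 0.1}

sections = {  # criterion scores on a 0-10 scale per pavement section
    "S1": {"distress_severity": 8, "traffic_volume": 6,
           "ride_quality": 7, "drainage": 5},
    "S2": {"distress_severity": 3, "traffic_volume": 9,
           "ride_quality": 4, "drainage": 6},
}

def priority_index(scores):
    """Weighted sum of criterion scores for one section."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

ranked = sorted(sections, key=lambda s: priority_index(sections[s]),
                reverse=True)
print(ranked)  # sections in descending maintenance priority
```

In a GIS, the same arithmetic is applied cell-by-cell (or per dynamic segment) across the raster or network layers representing each criterion.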

Keywords: pavement, flexible, maintenance, index

Procedia PDF Downloads 46
2051 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study utilizing Supervised Machine Learning

Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz

Abstract:

Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, its underlying pathophysiology is still incompletely understood. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. In this study, we extracted samples coding for IBS from the UK BioBank cohort and randomly selected patients without a code for IBS to create a total sample size of 18,000. We selected the codes for the comorbidities of these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machines (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature selection method and applied different models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression and XGBoost algorithms yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular diseases), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety).
This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms in predicting the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
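
The importance-based feature-selection pipeline described above can be sketched as follows. The data are synthetic binary comorbidity flags (not UK BioBank), and scikit-learn's gradient boosting stands in for XGBoost; only the shape of the pipeline, rank features by tree-based importance, keep the top 10%, refit a simpler classifier, follows the abstract.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Sketch of the pipeline: tree-based feature importance ranks p binary
# comorbidity indicators; the top 10% are kept and a logistic regression
# is fitted on the reduced feature set. Data are synthetic.
rng = np.random.default_rng(0)
n, p = 2000, 50
X = rng.integers(0, 2, size=(n, p)).astype(float)       # comorbidity flags
logit = X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2] - 0.4   # 3 informative features
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
top = np.argsort(gb.feature_importances_)[::-1][: p // 10]  # top 10% features
clf = LogisticRegression().fit(X_tr[:, top], y_tr)
print(sorted(top.tolist()), round(clf.score(X_te[:, top], y_te), 3))
```

On the synthetic data, the importance ranking recovers the informative comorbidity flags, and the reduced logistic model retains most of the predictive accuracy, mirroring the behaviour reported in the abstract.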

Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics

Procedia PDF Downloads 98
2050 Reasons for the Selection of Information-Processing Framework and the Philosophy of Mind as a General Account for an Error Analysis and Explanation on Mathematics

Authors: Michael Lousis

Abstract:

This research study is concerned with learners' errors in arithmetic and algebra. The data resulted from a broader international comparative research program called the Kassel Project; however, its conceptualisation differed from and contrasted with that of the main program, which was mostly based on socio-demographic data. The way in which the research study was conducted was not dependent on the researcher's discretion but was dictated by the nature of the problem under investigation. This is because the phenomenon of learners' mathematical errors is due neither to the intentions of learners, nor to institutional processes, rules and norms, nor to educators' intentions and goals, but rather to the way certain information is presented to learners and how their cognitive apparatus processes this information. Several approaches to the study of learners' errors, encompassing different belief systems, have been developed since the beginning of the 20th century: approaches based on behaviourist theory, on the Piagetian-constructivist research framework, on the perspective that followed the philosophy of science, and on the information-processing paradigm. The researcher of the present study had to disclose the learners' course of thinking that led them to specific observable actions, resulting in particular errors in specific problems, rather than analysing scripts with the students' thoughts presented in written form. This, in turn, entailed that the choice of methods had to be appropriate and conducive to seeing and realising the learners' errors from the perspective of the participants in the investigation. This fact determined important decisions concerning the selection of an appropriate framework for analysing the mathematical errors and giving explanations.
Thus the belief systems of behaviourism, Piagetian constructivism, and the philosophy-of-science perspective were rejected, and the information-processing paradigm, in conjunction with the philosophy of mind, was adopted as the general account for the elaboration of the data. This paper explains why these decisions were appropriate and beneficial for conducting the present study and for establishing the ensuing thesis. Additionally, it explains how the reasons for adopting the information-processing paradigm in conjunction with the philosophy of mind give sound and legitimate bases for the development of future studies of mathematical error analysis.

Keywords: advantages-disadvantages of theoretical prospects, behavioral prospect, critical evaluation of theoretical prospects, error analysis, information-processing paradigm, opting for the appropriate approach, philosophy of science prospect, Piagetian-constructivist research frameworks, review of research in mathematical errors

Procedia PDF Downloads 178
2049 Using Neural Networks for Click Prediction of Sponsored Search

Authors: Afroze Ibrahim Baqapuri, Ilya Trofimov

Abstract:

Sponsored search is a multi-billion dollar industry and a major source of revenue for search engines (SEs). Click-through rate (CTR) estimation plays a crucial role in ad selection and greatly affects SE revenue, advertiser traffic and user experience. We propose a novel architecture for solving the CTR prediction problem by combining artificial neural networks (ANNs) with decision trees. First, we compare the ANN against other popular machine learning models used for this task. Then we combine the ANN with MatrixNet (a proprietary implementation of boosted trees) and evaluate the performance of the system as a whole. The results show that our approach provides a significant improvement over existing models.
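
One simple way to combine a neural network with boosted trees for CTR estimation is to blend their predicted click probabilities. The sketch below uses synthetic data and scikit-learn models (MatrixNet is proprietary, so ordinary gradient boosting stands in); the 50/50 blend weight and all model settings are illustrative choices, not the paper's architecture.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

# Hedged sketch: blend ANN and boosted-tree click-probability estimates
# and score the blend with log-loss, a standard CTR-estimation metric.
rng = np.random.default_rng(0)
n, p = 3000, 10
X = rng.normal(size=(n, p))
true_logit = X[:, 0] + X[:, 1] * X[:, 2]          # includes an interaction
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
gbt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
blend = 0.5 * ann.predict_proba(X_te)[:, 1] + 0.5 * gbt.predict_proba(X_te)[:, 1]
print(log_loss(y_te, blend))  # lower log-loss = better CTR estimates
```

In practice the blend weight would be tuned on held-out data, and richer combinations (e.g. feeding one model's outputs to the other) are possible; the blend above is only the simplest instance of the ANN-plus-trees idea.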

Keywords: neural networks, sponsored search, web advertisement, click prediction, click-through rate

Procedia PDF Downloads 556
2048 The Current Development and Legislation on the Acquisition and Use of Nuclear Energy in Contemporary International Law

Authors: Uche A. Nnawulezi

Abstract:

Over the past decades, the acquisition and utilization of nuclear energy have remained among the most intractable issues that past world leaders have unsuccessfully endeavored to grapple with. This study analyzes present developments and legislation on the acquisition and utilization of nuclear energy in contemporary international law. It addresses international cooperation in the field of nuclear energy by examining what nuclear energy is and how it came into being. It also addresses concerns expressed by some researchers about the position of nuclear law within the wider domain of law by examining the legislative procedure for nuclear law and the system of treaties and conventions. The study also argues in favour of the treaty on the non-proliferation of nuclear weapons on human rights and humanitarian principles that are not only moral but also legal. Specifically, past development activities on nuclear weapons and the practical system of the nuclear energy institute are examined. The study notes, among other things, former President Obama's remarks on nuclear energy and Pakistan's nuclear policies and their attendant outcomes. It relied essentially on documentary evidence and hence drew a great part of its data from secondary sources. The study strongly advocates the adoption of absolute liability principles and the setting up of a viability trust fund, both of which would help sustain global peace, with global best practices in the acquisition and use of nuclear energy widely accepted in contemporary international law. The fundamental proposals made in this paper, if fully adopted, could go far in strengthening present developments and legislation on the application and utilization of nuclear energy and, accordingly, in addressing some of the intractable issues under international law.

Keywords: nuclear energy, international law, acquisition, development

Procedia PDF Downloads 161