Search results for: distance decay gradient
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2958

2568 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier performed best, achieving a median recall of 92.17%, a median area under the receiver operating characteristic curve (AUC) of 68%, and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
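
As an illustration of the pipeline this abstract describes (recursive feature elimination with cross-validation using an SVM, followed by several classifiers), a minimal scikit-learn sketch is given below; the data, feature dimensions, and parameter choices are placeholders, not the study's cohort.

```python
# Minimal sketch of SVM-based RFECV feature selection followed by one of the
# evaluated classifiers; X and y are random placeholders for the epigenetic data.
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = np.random.rand(200, 100), np.random.randint(0, 2, 200)  # placeholder data

# Recursive feature elimination with cross-validation using a linear SVM
selector = RFECV(SVC(kernel="linear"), step=1, cv=StratifiedKFold(5), scoring="recall")
X_selected = selector.fit_transform(X, y)
print("features kept:", selector.n_features_)

# One of the six evaluated models; recall/AUC/accuracy would be compared across them
gbc = GradientBoostingClassifier()
recall = cross_val_score(gbc, X_selected, y, cv=5, scoring="recall")
auc = cross_val_score(gbc, X_selected, y, cv=5, scoring="roc_auc")
print("median recall:", np.median(recall), "median AUC:", np.median(auc))
```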

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 28
2567 Active Contours for Image Segmentation Based on Complex Domain Approach

Authors: Sajid Hussain

Abstract:

A complex domain approach for image segmentation based on an active contour has been designed; the contour deforms step by step to partition an image into numerous expedient regions. A novel region-based trigonometric complex pressure force function is proposed, which propagates around the region of interest using image forces. The signed trigonometric force function controls the propagation of the active contour, and the active contour stops accurately on the exact edges of the object. The proposed model makes the level set function binary and uses a Gaussian smoothing kernel to adjust it and avoid the re-initialization procedure. The working principle of the proposed model is as follows: the real image data are transformed into complex data by taking iota (i) times the image data, and the average of iota (i) times the horizontal and vertical components of the image gradient is inserted into the proposed model to capture the complex gradient of the image data. A simple finite difference mathematical technique has been used to implement the proposed model. The efficiency and robustness of the proposed model have been verified and compared with other state-of-the-art models.
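
The complex-domain lifting described above can be read, in a minimal interpretation, as the numpy sketch below; the trigonometric pressure force and the level-set evolution are not reproduced, and the exact construction in the paper may differ.

```python
# A minimal numpy sketch of the complex-domain lifting described in the abstract:
# the real image is mapped to complex data, and an average of i-times the horizontal
# and vertical gradient components gives a "complex gradient" (an interpretation,
# not the paper's exact formulation).
import numpy as np
from scipy.ndimage import gaussian_filter

def complex_gradient(image, sigma=1.0):
    img = gaussian_filter(image.astype(float), sigma)   # Gaussian smoothing kernel
    z = img + 1j * img                                   # real data lifted to the complex domain
    gy, gx = np.gradient(z.real)                         # vertical/horizontal components
    return 0.5j * (gx + gy)                              # average of i-times the components

if __name__ == "__main__":
    image = np.zeros((64, 64)); image[20:44, 20:44] = 1.0   # toy object
    g = complex_gradient(image)
    print(g.shape, np.abs(g).max())
```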

Keywords: image segmentation, active contour, level set, Mumford and Shah model

Procedia PDF Downloads 76
2566 Investigation of Extreme Gradient Boosting Model Prediction of Soil Strain-Shear Modulus

Authors: Ehsan Mehryaar, Reza Bushehri

Abstract:

One of the principal parameters defining the dynamic response of clay soil is the strain-shear modulus relation. Predicting the strain and, subsequently, the shear modulus reduction of the soil is essential for the performance analysis of structures exposed to earthquake and dynamic loadings. Many soil properties affect the soil's dynamic behavior. In order to capture those effects, in this study, a database of 1193 data points, consisting of maximum shear modulus, strain, moisture content, initial void ratio, plastic limit, liquid limit, and initial confining pressure from dynamic laboratory testing of 21 clays, is collected for predicting the shear modulus vs. strain curve of soil. A model based on an extreme gradient boosting technique is proposed. A tree-structured Parzen estimator hyper-parameter tuning algorithm is utilized simultaneously to find the best hyper-parameters for the model. The performance of the model is compared to existing empirical equations using the coefficient of correlation and the root mean square error.
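
A hedged sketch of the modelling step described above is shown below: an XGBoost regressor tuned with a tree-structured Parzen estimator (here through the hyperopt library); the data array and its columns are placeholders for the 1193-point laboratory database.

```python
# Sketch: XGBoost regression for shear modulus reduction, tuned with a
# tree-structured Parzen estimator via hyperopt; X and y are placeholders.
import numpy as np
import xgboost as xgb
from hyperopt import fmin, tpe, hp, Trials
from sklearn.model_selection import cross_val_score

X = np.random.rand(1193, 6)   # e.g., strain, moisture, void ratio, PL, LL, confining pressure
y = np.random.rand(1193)      # normalized shear modulus G/Gmax

space = {
    "max_depth": hp.choice("max_depth", [3, 4, 5, 6]),
    "learning_rate": hp.uniform("learning_rate", 0.01, 0.3),
    "n_estimators": hp.choice("n_estimators", [100, 300, 500]),
}

def objective(params):
    model = xgb.XGBRegressor(**params)
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    return rmse  # TPE minimizes the cross-validated RMSE

best = fmin(objective, space, algo=tpe.suggest, max_evals=50, trials=Trials())
print("best hyper-parameters:", best)
```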

Keywords: XGBoost, hyper-parameter tuning, soil shear modulus, dynamic response

Procedia PDF Downloads 170
2565 A Comparative Study of Twin Delayed Deep Deterministic Policy Gradient and Soft Actor-Critic Algorithms for Robot Exploration and Navigation in Unseen Environments

Authors: Romisaa Ali

Abstract:

This paper presents a comparison between twin-delayed Deep Deterministic Policy Gradient (TD3) and Soft Actor-Critic (SAC) reinforcement learning algorithms in the context of training robust navigation policies for Jackal robots. By leveraging an open-source framework and custom motion control environments, the study evaluates the performance, robustness, and transferability of the trained policies across a range of scenarios. The primary focus of the experiments is to assess the training process, the adaptability of the algorithms, and the robot’s ability to navigate in previously unseen environments. Moreover, the paper examines the influence of varying environmental complexities on the learning process and the generalization capabilities of the resulting policies. The results of this study aim to inform and guide the development of more efficient and practical reinforcement learning-based navigation policies for Jackal robots in real-world scenarios.
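
A minimal sketch of such a TD3-versus-SAC comparison is given below, using stable-baselines3; a standard Gymnasium task stands in for the custom Jackal motion-control environments, which are assumed and not reproduced here.

```python
# Sketch: train TD3 and SAC on the same environment and compare evaluation returns.
# "Pendulum-v1" is a placeholder for the custom Jackal navigation environments.
import gymnasium as gym
from stable_baselines3 import TD3, SAC
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("Pendulum-v1")

for algo in (TD3, SAC):
    model = algo("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=20_000)
    mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
    print(f"{algo.__name__}: {mean_reward:.1f} +/- {std_reward:.1f}")
```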

Keywords: Jackal robot environments, reinforcement learning, TD3, SAC, robust navigation, transferability, custom environment

Procedia PDF Downloads 63
2564 AI-based Radio Resource and Transmission Opportunity Allocation for 5G-V2X HetNets: NR and NR-U Networks

Authors: Farshad Zeinali, Sajedeh Norouzi, Nader Mokari, Eduard Jorswieck

Abstract:

The capacity of fifth-generation (5G) vehicle-to-everything (V2X) networks poses significant challenges. To address these challenges, this paper utilizes New Radio (NR) and New Radio Unlicensed (NR-U) networks to develop a heterogeneous vehicular network (HetNet). We propose a new framework, named joint BS assignment and resource allocation (JBSRA), for mobile V2X users and also consider coexistence schemes based on a flexible duty cycle (DC) mechanism for unlicensed bands. Our objective is to maximize the average throughput of vehicles while guaranteeing the WiFi users' throughput. In simulations based on deep reinforcement learning (DRL) algorithms such as deep deterministic policy gradient (DDPG) and deep Q network (DQN), our proposed framework outperforms existing solutions that rely on fixed DC or schemes without consideration of unlicensed bands.

Keywords: vehicle-to-everything (V2X), resource allocation, BS assignment, new radio (NR), new radio unlicensed (NR-U), coexistence NR-U and WiFi, deep deterministic policy gradient (DDPG), deep Q-network (DQN), joint BS assignment and resource allocation (JBSRA), duty cycle mechanism

Procedia PDF Downloads 72
2563 Satisfaction of Distance Education University Students with the Use of Audio Media as a Medium of Instruction: The Case of Mountains of the Moon University in Uganda

Authors: Mark Kaahwa, Chang Zhu, Moses Muhumuza

Abstract:

This study investigates the satisfaction of distance education university students (DEUS) with the use of audio media as a medium of instruction. Studying students' satisfaction is vital because it shows whether learners are comfortable with a certain instructional strategy or not. Although previous studies have investigated the use of audio media, the satisfaction of students with an instructional strategy that combines radio teaching and podcasts as an independent teaching strategy has not been fully investigated. In this study, all lectures were delivered through the radio and students had no direct contact with their instructors. No modules or any other material in the form of text were given to the students. Instead, they revised the taught content by listening to podcasts saved on their mobile electronic gadgets. Prior to data collection, DEUS received orientation through workshops on how to use audio media in distance education. To achieve the objectives of the study, a survey, naturalistic observations, and face-to-face interviews were used to collect data from a sample of 211 undergraduate and graduate students. Findings indicate that there was no statistically significant difference in the levels of satisfaction between male and female students. The results from post hoc analysis show that there is a statistically significant difference in the levels of satisfaction regarding the use of audio media between diploma and graduate students. Diploma students are more satisfied compared to their graduate counterparts. T-test results reveal that there was no statistically significant difference in the general satisfaction with audio media between rural and urban-based students. ANOVA results indicate that there is no statistically significant difference in the levels of satisfaction with the use of audio media across age groups. Furthermore, results from observations and interviews reveal that DEUS found audio media a pleasurable medium of instruction. This is an indication that audio media can be considered an instructional strategy on its own merit.
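
The group comparisons reported above (t-tests and ANOVA with post hoc analysis) follow a standard pattern; a small SciPy sketch on a hypothetical survey data frame is given below, with column names that are illustrative only.

```python
# Sketch of the statistical comparisons described in the abstract, on hypothetical data.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "satisfaction": [4.1, 3.8, 4.5, 3.9, 4.2, 3.7, 4.0, 4.4],
    "gender": ["M", "F", "M", "F", "M", "F", "F", "M"],
    "level": ["diploma", "graduate", "diploma", "graduate",
              "diploma", "graduate", "diploma", "graduate"],
})

# Independent-samples t-test: male vs. female satisfaction
t, p = stats.ttest_ind(df.loc[df.gender == "M", "satisfaction"],
                       df.loc[df.gender == "F", "satisfaction"])
print("gender t-test p =", round(p, 3))

# One-way ANOVA across study levels (post hoc tests, e.g. Tukey HSD, would follow)
groups = [g["satisfaction"].values for _, g in df.groupby("level")]
f, p = stats.f_oneway(*groups)
print("level ANOVA p =", round(p, 3))
```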

Keywords: audio media, distance education, distance education university students, medium of instruction, satisfaction

Procedia PDF Downloads 99
2562 Spectroscopic Studies of Dy³⁺ Ions in Alkaline-Earth Boro Tellurite Glasses for Optoelectronic Devices

Authors: K. Swapna

Abstract:

A series of Alkaline-Earth Boro Tellurite (AEBT) glasses doped with different concentrations of Dy³⁺ ions has been prepared by using the melt quenching technique and characterized through spectroscopic techniques such as optical absorption, excitation, emission and photoluminescence decay to understand their utility in optoelectronic devices such as lasers and white light emitting diodes (w-LEDs). The Raman spectrum recorded for an undoped glass is used to measure the phonon energy of the host glass (AEBT) and to identify the various functional groups present in it. The intensities of the electronic transitions and the ligand environment around the Dy³⁺ ions were studied by applying Judd-Ofelt (J-O) theory to the recorded absorption spectra of the glasses. The evaluated J-O parameters are subsequently used to measure various radiative parameters such as transition probability (AR), radiative branching ratio (βR) and radiative lifetimes (τR) for the prominent fluorescent levels of Dy³⁺ ions in the as-prepared glasses. The luminescence spectra recorded at 387 nm excitation show three emission transitions (⁴F9/2→⁶H15/2 (blue), ⁴F9/2→⁶H13/2 (yellow) and ⁴F9/2 → ⁶H11/2 (red)), of which the yellow transition observed at 575 nm is found to be highly intense. The experimental branching ratio (βexp) and stimulated emission cross-section (σse) were measured from the luminescence spectra. The experimental lifetimes (τexp) measured from the decay spectral profiles are combined with the radiative lifetimes to measure the quantum efficiencies of the as-prepared glasses. The yellow to blue intensity ratios and chromaticity color coordinates are found to vary with Dy³⁺ ion concentration. The aforementioned results reveal that these glasses are well suited for w-LEDs and laser devices.

Keywords: glasses, J-O parameters, photoluminescence, I-H model

Procedia PDF Downloads 129
2561 Functional Vision of Older People in Galician Nursing Homes

Authors: C. Vázquez, L. M. Gigirey, C. P. del Oro, S. Seoane

Abstract:

Early detection of visual problems plays a key role in the aging process. However, although vision problems are common among older people, the percentage of aging people who perform regular optometric exams is low. In fact, uncorrected refractive errors are one of the main causes of visual impairment in this group of the population. Purpose: To evaluate the functional vision of older residents in order to show the urgent need for visual screening programs in Galician nursing homes. Methodology: We examined 364 older adults aged 65 years and over. To measure vision in daily living, we tested distance and near presenting visual acuity (binocular visual acuity with habitual correction if worn, directional E-Snellen). Presenting near vision was tested at the usual working distance. We defined visual impairment (distance and near) as a presenting visual acuity less than 0.3. Exclusion criteria included immobilized residents unable to reach the USC Dual Sensory Loss Unit for visual screening. Associations between categorical variables were assessed using chi-square tests. We used Pearson and Spearman correlation tests and analysis of variance to determine differences between groups of interest. Results: 23.1% of participants have visual impairment for distance vision and 16.4% for near vision. The percentage of residents with both far and near visual impairment reaches 8.2%. As expected, the prevalence of visual impairment increases with age. No differences exist with regard to the level of functional vision between genders. Differences exist between age groups with respect to distance vision, but not in the case of near vision. Conclusion: Prevalence of visual impairment is high among the older people tested in this pilot study. This means a high percentage of older people have limitations in their daily life activities. It is necessary to develop an effective vision screening program for early detection of vision problems in Galician nursing homes.

Keywords: functional vision, elders, aging, nursing homes

Procedia PDF Downloads 385
2560 An Optimal Control Model to Determine Body Forces of Stokes Flow

Authors: Yuanhao Gao, Pin Lin, Kees Weijer

Abstract:

In this paper, we determine the external body force distribution from an analysis of Stokes fluid motion using mathematical modelling and numerical approximation. The body force distribution is regarded as the unknown variable and is determined using ideas from optimal control theory. The Stokes flow motion and its velocity are generated by given forces in a unit square domain. A regularized objective functional is built to match the numerical flow velocity with the generated velocity data, so that the force distribution can be determined by minimizing the value of the objective functional, which is also the difference between the numerical and experimental velocity. After applying the Lagrange multiplier method, partial differential equations are formulated that constitute the optimal control system to be solved. The finite element method and the conjugate gradient method are used to discretize the equations and to deduce an iterative expression for the target body force, so that the velocity and the body force distribution can be computed numerically. The programming environment FreeFEM++ supports the implementation of this model.
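
A toy, discrete analogue of this inverse problem is sketched below: a body force is recovered by minimizing a regularized misfit with a conjugate-gradient method; the matrix A stands in for the discretized Stokes solve, and the actual FreeFEM++ finite element model is not reproduced.

```python
# Toy analogue: recover f minimizing ||A f - u_data||^2 + alpha ||f||^2 with CG.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))      # placeholder "force -> velocity" operator
f_true = rng.standard_normal(50)
u_data = A @ f_true                      # generated velocity data
alpha = 1e-3                             # Tikhonov regularization weight

def J(f):
    r = A @ f - u_data
    return 0.5 * r @ r + 0.5 * alpha * f @ f

def grad_J(f):
    return A.T @ (A @ f - u_data) + alpha * f

res = minimize(J, np.zeros(50), jac=grad_J, method="CG")
print("relative error:", np.linalg.norm(res.x - f_true) / np.linalg.norm(f_true))
```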

Keywords: optimal control model, Stokes equation, finite element method, conjugate gradient method

Procedia PDF Downloads 373
2559 Surface Quality Improvement of Abrasive Waterjet Cutting for Spacecraft Structure

Authors: Tarek M. Ahmed, Ahmed S. El Mesalamy, Amro M. Youssef, Tawfik T. El Midany

Abstract:

Abrasive waterjet (AWJ) machining is considered one of the most powerful cutting processes. It can be used for cutting heat-sensitive, hard, and reflective materials. Aluminum 2024 is a high-strength alloy which is widely used in the aerospace and aviation industries. This paper aims to improve the surface quality of AWJ-cut aluminum 2024 alloy and to investigate the effect of AWJ control parameters on surface geometry quality. Design of experiments (DoE) is used for establishing an experimental matrix. Statistical modeling is used to present a relation between the cutting parameters (pressure, speed, and distance between the nozzle and cut surface) and the responses (taper angle and surface roughness). The results revealed a tangible improvement in productivity by using AWJ processing. The taper kerf angle can be improved by decreasing the standoff distance and speed and increasing the water pressure, while decreasing the cutting speed, pressure, and standoff distance improves the surface roughness within the operating window of cutting parameters.
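
The statistical modelling step described above, relating pressure, speed, and standoff distance to the responses, can be sketched with an ordinary least squares fit; the design-of-experiments table below is hypothetical and its values are illustrative only.

```python
# Sketch: OLS model relating AWJ cutting parameters to surface roughness
# on a hypothetical design-of-experiments table.
import pandas as pd
import statsmodels.formula.api as smf

doe = pd.DataFrame({
    "pressure":  [200, 200, 300, 300, 200, 300, 250, 250],   # MPa
    "speed":     [100, 200, 100, 200, 150, 150, 100, 200],   # mm/min
    "standoff":  [2, 4, 4, 2, 3, 3, 2, 4],                    # mm
    "roughness": [3.1, 4.0, 2.8, 3.5, 3.4, 3.0, 2.9, 3.9],    # Ra, micrometres
})

model = smf.ols("roughness ~ pressure + speed + standoff", data=doe).fit()
print(model.params)      # coefficient signs show how each parameter drives roughness
print(model.rsquared)
```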

Keywords: abrasive waterjet machining, machining of aluminum alloy, non-traditional cutting, statistical modeling

Procedia PDF Downloads 229
2558 Molecular Modeling of 17-Picolyl and 17-Picolinylidene Androstane Derivatives with Anticancer Activity

Authors: Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Lidija Jevrić, Evgenija Djurendić, Jovana Ajduković

Abstract:

In the present study, the molecular modeling of a series of 24 17-picolyl and 17-picolinylidene androstane derivatives with significant anticancer activity was carried out. Modelling of the studied compounds was performed with the CS ChemBioDraw Ultra v12.0 program for drawing 2D molecular structures and CS ChemBio3D Ultra v12.0 for 3D molecular modelling. The obtained 3D structures were subjected to energy minimization using the molecular mechanics force field method (MM2). The cutoff for structure optimization was set at a gradient of 0.1 kcal/Åmol. Full geometry optimization was done by the Austin Model 1 (AM1) until the root mean square (RMS) gradient reached a value smaller than 0.0001 kcal/Åmol, using the Molecular Orbital Package (MOPAC) program. The obtained physicochemical, lipophilicity, and topological descriptors were used for the analysis of molecular similarities and dissimilarities, applying suitable chemometric methods (principal component analysis and cluster analysis). These results are part of project No. 114-451-347/2015-02, financially supported by the Provincial Secretariat for Science and Technological Development of Vojvodina and CMST COST Action CM1306.
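
The chemometric step described above (PCA and cluster analysis on the computed descriptors) can be sketched as follows; the descriptor matrix is random placeholder data for 24 hypothetical derivatives, not the published descriptors.

```python
# Sketch: PCA plus hierarchical cluster analysis on a molecular-descriptor matrix.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

descriptors = np.random.rand(24, 10)     # placeholder physicochemical/lipophilicity/topological descriptors
X = StandardScaler().fit_transform(descriptors)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("explained variance:", pca.explained_variance_ratio_)

Z = linkage(X, method="ward")            # hierarchical clustering of the derivatives
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster labels:", labels)
```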

Keywords: androstane derivatives, anticancer activity, chemometrics, molecular descriptors

Procedia PDF Downloads 332
2557 Tribological Performance of Polymer Syntactic Foams in Low-Speed Conditions

Authors: R. Narasimha Rao, Ch. Sri Chaitanya

Abstract:

Syntactic foams are closed-cell foams with high specific strength and high compression strength. At low speeds, the wear rate is sensitive to the sliding speed and other tribological parameters such as the applied load and the sliding distance. In the present study, the tribological performance of the polymer-based syntactic foams was reported based on experiments conducted on a pin-on-disc tribometer. The syntactic foams were manufactured with epoxy as the matrix and cenospheres obtained from thermal power plants as the reinforcement. The experiments were conducted at a sliding speed of 1 m/s. The applied load was varied from 1 kg to 5 kg, up to a sliding distance of 3000 m. The wear rate increased with the sliding distance at lower loads. The trend was reversed at the higher load of 5 kg. This may be due to the high plastic deformation at the initial stages when higher loads were applied, which was evident from the higher friction constants at the higher loads. Adhesive wear was found to be predominant at lower loads, while abrasive wear tracks can be seen in the micrographs of samples tested under higher loads.

Keywords: sliding speed, syntactic foams, tribological performance, wear rate

Procedia PDF Downloads 53
2556 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)

Authors: Azimollah Aleshzadeh, Enver Vural Yavuz

Abstract:

The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), the evidence belief function (EBF), and the information content model (ICM), for Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from an interpretation of high-resolution satellite images and earlier reports, as well as by carrying out field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models, while the remaining 30% (12 landslide incidences) were used for verification purposes. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with landslide incidences that were not used during the model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUC values for the success rates are 0.7055, 0.7221, and 0.7368, while those for the prediction rates are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively. Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the portion of construction and verification landslide incidences in the high and very high landslide susceptibility classes in each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection.
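
Two pieces of this workflow lend themselves to a short sketch: the entropy weight method for weighting the predisposing parameters and the AUC validation of the resulting susceptibility index. The sketch below uses placeholder data and the standard EWM formulation, which may differ in detail from the paper's implementation.

```python
# Sketch: standard entropy weight method plus AUC validation on placeholder data.
import numpy as np
from sklearn.metrics import roc_auc_score

def entropy_weights(X):
    """Column-normalize, compute Shannon entropy per parameter, weight by (1 - entropy)."""
    P = X / X.sum(axis=0)
    n = X.shape[0]
    e = -np.sum(P * np.log(P), axis=0) / np.log(n)
    d = 1.0 - e
    return d / d.sum()

# rows = mapping-unit cells, columns = predisposing parameters (slope, distance to
# stream, elevation, ...); values are illustrative positive class ratings
X = np.random.rand(500, 12) + 0.01
w = entropy_weights(X)
susceptibility = X @ w                      # weighted susceptibility index

landslide = np.random.randint(0, 2, 500)    # 1 = landslide cell (verification inventory)
print("AUC:", roc_auc_score(landslide, susceptibility))
```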

Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping

Procedia PDF Downloads 107
2555 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation having an essential role in the control of the nuclear fuel recycling process. The main objective behind the technical optimization of the actual ‘beaker’ method was to reduce the amount of radioactive substance to be handled by the laboratory personnel, to ease the instrumentation adjustability within a glove-box environment, and to allow high-throughput analysis for conducting more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion in order to create, inside a 200 μm × 5 cm circular cylindrical micro-channel, a linear concentration gradient in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro channel. This feature simplifies the fabrication and ease of use of the micro device, as it does not need a complex micro channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, its generation can be fully achieved in less than one second, a more time-efficient gradient generation process compared to other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 366
2554 Railway Transport as a Potential Source of Polychlorinated Biphenyls in Soil

Authors: Nataša Stojić, Mira Pucarević, Nebojša Ralević, Vojislava Bursić, Gordan Stojić

Abstract:

Surface soil (0 – 10 cm) samples from 52 sampling sites along the length of railway tracks on the territory of Srem (the western part of the Autonomous Province of Vojvodina, itself part of Serbia) were collected and analyzed for 7 polychlorinated biphenyls (PCBs) in order to see how the distance from the railroad, on the one hand, and from the landfill, on the other, affects the concentration of PCBs (CPCB) in the soil. Samples were taken at a distance of 0.03 to 4.19 km from the railway and 0.43 to 3.35 km from the landfills. For the soil extraction, Soxhlet extraction (USEPA 3540S) was used. The extracts were purified on a silica-gel column (USEPA 3630C). The analysis of the extracts was performed by gas chromatography with tandem mass spectrometry. PCBs were not detected at only two locations. The mean total concentration of PCBs for all other sampling locations was 0.0043 ppm dry weight (dw), with a range of 0.0005 to 0.0227 ppm dw. From the part of the data that was relevant to this research, factors that affect the concentration of PCBs were isolated with statistical methods (PCA). Data were also analyzed using Pearson's chi-squared test, which showed that the hypothesis of independence between CPCB and the distance from the railway can be rejected. The hypothesis of independence between CPCB and the percentage of humus in the soil can also be rejected; in contrast, for CPCB and the distance from the landfill, the hypothesis of independence cannot be rejected. Based on these results, it can be said that railway transport is a potential source of PCBs. The next step in this research is to establish the positions of the transformers located near the sampling sites, as another important factor that affects the concentration of PCBs in the soil.

Keywords: GC/MS, landfill, PCB, railway, soil

Procedia PDF Downloads 303
2553 Understanding the Influence of Cross-National Distances on Tourist Expenditure

Authors: Wei-Ting Hung

Abstract:

Inbound tourist expenditure might not only be influenced by individual tourist characteristics but may also be affected by nationality characteristics. Cross-national distance effects on tourist consumption behavior should therefore be incorporated in the analytical framework. Additionally, the often-used factor analysis, cluster analysis, and regression analysis overlook the hierarchical structure of tourist consumption data and may lead to misleading results. The objectives of the present study were twofold. First, we propose a multilevel model that takes individual and cross-national differences into account under a hierarchical framework. Second, we further sought to determine the types of cross-national differences affecting tourist expenditure. Thus, this study incorporates individual tourist effects and cross-national distance effects simultaneously and uses data from the 2010 Annual Survey Report on Visitors' Expenditure and Trends in Taiwan to investigate the determinants of inbound tourist expenditure. Multilevel analysis was used to investigate the influence of individual tourist effects and cross-national distance effects on inbound tourist expenditure. The empirical results show that cross-national distance plays a crucial role in tourist consumption behavior. Our findings also indicate that age and income have a positive influence on tourism expenditure, whereas education and gender do not have a significant impact. Regarding macro-level factors, geographic and cultural differences exhibited significant positive relationships with tourism expenditure, while economic differences did not. Based on the above empirical results, it is suggested that tour operators should take tourists' individual attributes, particularly their income and age, into consideration when arranging tours. In addition, nationality holds sway over tourists' consumption behavior, of which geographic and cultural differences are the two major factors at play. The empirical results of this study serve as practical suggestions for tourism marketing strategies and as policy implications for governments.
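
The two-level structure described above, tourists nested within nationalities, can be sketched as a mixed-effects model in statsmodels; the data frame and variable names below are hypothetical stand-ins for the survey data.

```python
# Sketch: tourists (level 1) nested in nationalities (level 2), fitted as a
# random-intercept mixed-effects model; the data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

n = 300
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "expenditure": rng.normal(200, 50, n),
    "age": rng.integers(18, 70, n),
    "income": rng.normal(40, 10, n),
    "cultural_distance": rng.normal(0, 1, n),
    "nationality": rng.choice(["JP", "US", "SG", "MY", "KR"], n),
})

# Random intercept for nationality captures cross-national differences;
# macro-level covariates such as cultural distance enter as fixed effects.
model = smf.mixedlm("expenditure ~ age + income + cultural_distance",
                    data=df, groups=df["nationality"]).fit()
print(model.summary())
```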

Keywords: cross national distance, inbound tourist, multilevel analysis, tourist expenditure

Procedia PDF Downloads 329
2552 Competence on Learning Delivery Modes and Performance of Physical Education Teachers in Senior High Schools in Davao

Authors: Juvanie C. Lapesigue

Abstract:

Worldwide school closures resulted from a significant public health crisis that has affected the nation and the entire world. It has affected students, educators, educational organizations globally, and many other aspects of society. Academic institutions worldwide teach students using diverse approaches across various learning delivery modes. This paper investigates the competence and performance of physical education teachers using various learning delivery modes, including distance learning, blended learning, and homeschooling during online distance education. It seeks to identify the gap between age generations in using the various learning delivery modes that affects teachers' preparation for distance learning, and it evaluates how these modalities impact teachers' competence and performance in the case of a pandemic. The respondents were the senior high school teachers of the Department of Education who taught in Davao City before and during the pandemic. Purposive sampling was utilized on 61 senior high school teachers in Davao City, Philippines. The results indicated that pedagogy and assessment significantly affected teaching performance in physical education, particularly for non-PE teachers teaching physical education subjects. These teachers should be provided with enhancement training workshops to help them prepare more successfully in terms of teaching pedagogy and assessment in the next normal. Hence, a unique training design for non-P.E. teachers has been created to improve their performance in terms of pedagogy and assessment in teaching P.E. subjects across various learning delivery modes in the next normal.

Keywords: distance learning, learning delivery modes, P.E teachers, senior high school, teaching competence, teaching performance

Procedia PDF Downloads 68
2551 Hybridized Approach for Distance Estimation Using K-Means Clustering

Authors: Ritu Vashistha, Jitender Kumar

Abstract:

Clustering using the K-means algorithm is a very common way to understand and analyze the obtained output data. Grouping similar objects together is the basis of clustering. There are C objects to be grouped into K clusters, where K is always supposed to be less than C and each cluster has its own centroid; the major problem is how to identify whether a cluster is correct based on the data. Cluster formation is not a one-off task for every tuple, row record, or entity; it is done by an iterative process. Each and every record, tuple, and entity is checked and examined, and its similarity or dissimilarity is measured. This iterative process is therefore very lengthy, may not give an optimal output for the clusters, and is costly in the time taken to find them. To overcome this drawback, we propose a formula to find the clusters at run time, so that this approach can give us optimal results. The proposed approach uses the Euclidean distance formula, as well as melanosis, to find the minimum distance between slots, as we technically call clusters, and we have also applied the same approach to the Ant Colony Optimization (ACO) algorithm, which results in the production of two- and multi-dimensional matrices.
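
For reference, the iterative K-means step the abstract refers to is sketched below in plain numpy (assignment by Euclidean distance, then centroid update); the run-time clustering formula proposed in the paper is not reproduced.

```python
# Minimal K-means sketch: assign records to the nearest centroid by Euclidean
# distance, then recompute centroids until they stop moving.
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Euclidean distance of every record to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

X = np.random.rand(100, 2)
labels, centroids = kmeans(X, k=3)
print(centroids)
```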

Keywords: ant colony optimization, data clustering, centroids, data mining, k-means

Procedia PDF Downloads 106
2550 The Twelfth Rib as a Landmark for Surgery

Authors: Jake Tempo, Georgina Williams, Iain Robertson, Claire Pascoe, Darren Rama, Richard Cetti

Abstract:

Introduction: The twelfth rib is commonly used as a landmark for surgery; however, its variability in length has not been formally studied. The highly variable rib length provides a challenge for urologists seeking a consistent landmark for percutaneous nephrolithotomy and retroperitoneoscopic surgery. Methods and materials: We analysed CT scans of 100 adults who had imaging between 23rd March and 12th April 2020 at an Australian hospital. We measured the distance from the mid-sagittal line to the twelfth rib tip in the axial plane as a surrogate for true rib length. We also measured the distance from the twelfth rib tip to the kidney, spleen, and liver. Results: The length from the mid-sagittal line to the right twelfth rib tip varied from 46 mm (percentile 95% CI 40 to 57) to 136 mm (percentile 95% CI 133 to 138). On the left, the distances varied from 55 mm (percentile 95% CI 50 to 64) to 134 mm (percentile 95% CI 131 to 135). Twenty-three percent of people had an organ lying between the tip of the twelfth rib and the kidney on the right, and 11% of people had the same finding on the left. Conclusion: The twelfth rib is highly variable in its length. Similar variability was recorded in the distance from the tip to intra-abdominal organs. Due to the frequency of organs lying between the tip of the rib and the kidney, it should not be used as a landmark for accessing the kidney without prior knowledge of an individual patient's anatomy, as seen on imaging.

Keywords: PCNL, rib, anatomy, nephrolithotomy

Procedia PDF Downloads 85
2549 Magnetic Lines of Force and Diamagnetism

Authors: Angel Pérez Sánchez

Abstract:

Magnet attraction or repulsion is not the product of a strange force acting from afar but comes from lines of force anchored inside the magnet, much as in reinforced concrete, where a small block can be moved by taking hold of the steel rods that protrude from its interior. This approach serves as a basis for studying the behavior of diamagnetic materials. The significance of this study is to unify all diamagnetic phenomena: the movement of grapes, copper approaching a magnet, magnet levitation, etc., with a single explanation for all these phenomena. The method followed has consisted of the observation of hundreds of diamagnetism experiments (in copper, aluminum, grapes, tomatoes, and bismuth), including the creation of new experiments of our own, and the application of logical deduction from these observations. When a magnet approaches a hanging grape, diamagnetism seems to consist not only of a slight repulsion but also of a slight attraction at a small distance. Replacing the grape with a copper sphere, it behaves like the grape, pushing and pulling a nearby magnet. Diamagnetism could be redefined in the following way: there are materials that do not magnetize their internal structure when approaching a magnet, as ferromagnetic materials do, but they do allow magnetic lines of force to run through their interior, enhancing them without creating lines of force of their own. A magnet levitates on superconducting ceramics because, near the poles, the magnet gives the lines a force superior to what the superconductor can add by enhancing them. A little further from the magnet, the enhancement of the lines by the superconductor is greater than the strength provided by the magnet, due to the distance from the magnet's pole. It is this point that defines the magnet's levitation band. The anchoring effect of the lines is what ultimately keeps the magnet and the superconductor at a certain distance. The magnet seeks to levitate in the area in which the magnetic lines are stronger, near the magnet's poles. Pouring ferrofluid onto a magnet, lines of force are observed coming out of the poles. On other occasions, diamagnetic materials simply enhance the lines they receive without moving their position, since their own weight is greater than the strength of the enhanced lines (this is the case with grapes and copper). The magnet and the diamagnetic material look for a place where the lines of force are most enhanced, and this is at a small distance. Once the ideal distance is established, they tend to keep it by pushing or pulling on each other. At a certain distance from the magnet, the power exerted by diamagnetic materials is greater than the force of the lines in the vicinity of the magnet's poles. All diamagnetism phenomena (copper, aluminum, grape, tomato, and bismuth levitation, and magnet levitation on superconducting ceramics) can now be explained with the support of magnetic lines of force.

Keywords: diamagnetism, magnetic levitation, magnetic lines of force, enhancing magnetic lines

Procedia PDF Downloads 66
2548 Slope Stability Study at Jalan Tun Sardon and Sungai Batu, Pulau Pinang, Malaysia by Using 2-D Resistivity Method

Authors: Muhamad Iqbal Mubarak Faharul Azman, Azim Hilmy Mohd Yusof, Nur Azwin Ismail, Noer El Hidayah Ismail

Abstract:

Landslides and rock falls are examples of environmental and engineering problems in Malaysia. There are various methods that can be applied to such environmental and engineering problems, but geophysical methods are seldom applied as the main investigation technique. This paper aims to study slope stability by using the 2-D resistivity method at Jalan Tun Sardon and Sungai Batu, Pulau Pinang. These areas are considered to have high potential for slope instability in Penang Island, based on recent cases of rockfall and landslide reported, especially during the rainy season. At both study areas, resistivity values greater than 5000 ohm-m are detected and considered to be fresh granite. Weathered granite is indicated by resistivity values of 750-1500 ohm-m at depths of < 14 m in the Sungai Batu area, while at the Jalan Tun Sardon area, weathered granite with resistivity values of 750-2000 ohm-m is found at depths of < 14 m at distances of 0-90 m, but at distances of 95-150 m it is found at depths of < 26 m. A saturated zone is detected only at Sungai Batu, with resistivity values of < 250 ohm-m at distances of 100-120 m. A fracture is detected at a distance of about 70 m in the Jalan Tun Sardon area. Slope instability is expected to be driven by the weathered granite that dominates the subsurface of the study areas, along with triggering factors such as heavy rainfall.

Keywords: 2-D resistivity, environmental issue, landslide, slope stability

Procedia PDF Downloads 201
2547 The Effects of Cultural Distance and Institutions on Foreign Direct Investment Choices: Evidence from Turkey and China

Authors: Nihal Kartaltepe Behram, Göksel Ataman, Dila Okçu

Abstract:

With the development of foreign direct investments, the social, cultural, political, and economic interactions between countries and institutions have become visible, and they have become determining factors for strategic structuring and market goals. In this context, the purpose of this study is to investigate the effects of cultural distance and institutions on foreign direct investment choices in terms of location and investment model. For international establishments, the concept of culture, as well as the concept of cultural distance, is taken specifically into consideration, especially in the selection of methods for entering the market. In the empirical studies conducted, a direct relationship between cultural distance and foreign direct investments is established, and institutions and other effective variables are examined when defining the investment types. When detailed calculation strategies and empirical studies are taken into consideration, the most common methods for determining the direct investment model, considering cultural distances, are full-ownership enterprises and joint ventures. Also, when all of the factors affecting the investments are taken into consideration, it was seen that the effect of institutions such as Government Intervention, Intellectual Property Rights, Corruption, and Contract Enforcement is very important. Furthermore, agglomeration is more intense and has a stronger effect on investment than the other factors. China has been selected as the target country due to its effectiveness in the world economy and its contributions to the developing countries with which it has commercial relationships. Qualitative research methods are used in this study to measure the effects of the determinative variables in the hypotheses on foreign direct investors and to evaluate the findings. In this study, in-depth interviews are used as the data collection method, and the data are analyzed through descriptive analysis. All interviews and analyses show that foreign direct investments are highly reactive to institutions and cultural distance. On the other hand, agglomeration is the strongest determining factor for foreign direct investors in the Chinese market. That these factors, which comprise the sectoral aggregate, are not as strong as agglomeration is the most important finding. We expect that this study will become a beneficial guideline for developed and developing countries and for the strategic plans of local and national institutions.

Keywords: China, cultural distance, Foreign Direct Investments, institutions

Procedia PDF Downloads 392
2546 Using Integrative Assessment in Distance Learning: The Case of Department of Education - Navotas City

Authors: Meduranda Marco

Abstract:

This paper aimed to discuss the Integrative Assessment (IA) initiative of the Schools Division Office - Navotas City. The introduction provided a brief landscape analysis of the current state of education, the context of SDO Navotas, and the rationale for the administration of Integrative Assessment (IA) in schools. The IA methodology, procedure, and implementation activities were also shared. Feedback and reports on IA showed positive results as all schools in the Division were able to operationalize IA and consequently foster academic ease for learners and parents. Challenges met after compliance were also documented and strategies to continuously improve the Integrative Assessment process were proposed.

Keywords: distance learning assessment, integrative assessment, academic ease, learning outcomes evaluation

Procedia PDF Downloads 115
2545 Comparative Analysis of Dissimilarity Detection between Binary Images Based on Equivalency and Non-Equivalency of Image Inversion

Authors: Adnan A. Y. Mustafa

Abstract:

Image matching is a fundamental problem that arises frequently in many aspects of robot and computer vision. It can become a time-consuming process when matching images to a database consisting of hundreds of images, especially if the images are big. One approach to reducing the time complexity of the matching process is to reduce the search space in a pre-matching stage, by simply removing dissimilar images quickly. The Probabilistic Matching Model for Binary Images (PMMBI) showed that dissimilarity detection between binary images can be accomplished quickly by random pixel mapping and is size invariant. The model is based on the gamma binary similarity distance that recognizes an image and its inverse as containing the same scene and hence considers them to be the same image. However, in many applications, an image and its inverse are not treated as being the same but rather dissimilar. In this paper, we present a comparative analysis of dissimilarity detection between PMMBI based on the gamma binary similarity distance and a modified PMMBI model based on a similarity distance that does distinguish between an image and its inverse as being dissimilar.
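
The idea behind the comparison can be illustrated with the sketch below: dissimilarity is estimated from a random sample of mapped pixels, once with an inverse-invariant distance and once with a distance that treats the inverse as dissimilar. This is an illustration of the concept only, not the published PMMBI or gamma-distance formulas.

```python
# Illustration: random-pixel dissimilarity between binary images, with and without
# invariance to image inversion; not the paper's exact similarity distance.
import numpy as np

def sampled_mismatch(a, b, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, a.size, n_samples)          # random pixel mapping
    return np.mean(a.flat[idx] != b.flat[idx])        # fraction of mismatched pixels

def inverse_invariant_distance(a, b, **kw):
    # image and its negative count as the same scene
    return min(sampled_mismatch(a, b, **kw), sampled_mismatch(a, 1 - b, **kw))

def inverse_sensitive_distance(a, b, **kw):
    # image and its negative count as different scenes
    return sampled_mismatch(a, b, **kw)

img = (np.random.rand(128, 128) > 0.5).astype(int)
print(inverse_invariant_distance(img, 1 - img))   # ~0: inverse treated as the same scene
print(inverse_sensitive_distance(img, 1 - img))   # ~1: inverse treated as dissimilar
```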

Keywords: binary image, dissimilarity detection, probabilistic matching model for binary images, image mapping

Procedia PDF Downloads 121
2544 Kriging-Based Global Optimization Method for Bluff Body Drag Reduction

Authors: Bingxi Huang, Yiqing Li, Marek Morzynski, Bernd R. Noack

Abstract:

We propose a Kriging-based global optimization method for active flow control with multiple actuation parameters. This method is designed to converge quickly and avoid getting trapped in local minima. We follow the model-free explorative gradient method (EGM) to alternate between explorative and exploitive steps. This facilitates a convergence similar to a gradient-based method and the parallel exploration of potentially better minima. In contrast to EGM, both kinds of steps are performed with a Kriging surrogate model built from the available data. The explorative step maximizes the expected improvement, i.e., favors regions of large uncertainty. The exploitive step identifies the best location of the cost function from the Kriging surrogate model for a subsequent weight-biased linear-gradient descent search method. To verify the effectiveness and robustness of the improved Kriging-based optimization method, we have examined several comparative test problems of varying dimensions with limited evaluation budgets. The results show that the proposed algorithm significantly outperforms some model-free optimization algorithms, such as the genetic algorithm and the differential evolution algorithm, with a quicker convergence for a given budget. We have also performed direct numerical simulations of the fluidic pinball (N. Deng et al. 2020 J. Fluid Mech.) on three circular cylinders in an equilateral-triangular arrangement immersed in an incoming flow at Re=100. The optimal cylinder rotations lead to 44.0% net drag power saving with 85.8% drag reduction and 41.8% actuation power. The optimal results for active flow control based on this configuration have achieved a boat-tailing mechanism by employing Coanda forcing, and wake stabilization by delaying separation and minimizing the wake region.
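
A hedged sketch of such a Kriging-assisted loop is given below: a Gaussian-process surrogate is fitted to the evaluated actuation parameters, an explorative step maximizes expected improvement, and a simplified exploitive step takes the surrogate minimum over random candidates (the paper uses a gradient-descent search instead); the CFD cost function is replaced by a cheap analytic placeholder.

```python
# Sketch of a Kriging (Gaussian-process) assisted explorative/exploitive loop.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def cost(x):                      # placeholder for the drag-power objective
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (8, 3))    # initial actuation-parameter samples (e.g. rotations)
y = cost(X)

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(-1, 1, (2000, 3))
    mu, sigma = gp.predict(cand, return_std=True)
    imp = y.min() - mu
    z = imp / (sigma + 1e-12)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)      # expected improvement
    x_explore = cand[np.argmax(ei)]                   # explorative step: maximize EI
    x_exploit = cand[np.argmin(mu)]                   # simplified exploitive step
    for x_new in (x_explore, x_exploit):
        X = np.vstack([X, x_new]); y = np.append(y, cost(x_new))

print("best parameters:", X[np.argmin(y)], "best cost:", y.min())
```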

Keywords: direct numerical simulations, flow control, kriging, stochastic optimization, wake stabilization

Procedia PDF Downloads 83
2543 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses. These losses contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques to reduce the secondary flow loss. In this paper, the non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, design of experiments, and the optimization. All the flow simulations were conducted by using steady RANS and Spalart-Allmaras as a turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created by using a perturbation law based on Bezier curves. Each cut, having multiple control points, was to be created along the virtual streamlines in the blade channel. For the design of experiments, each sample was arbitrarily generated based on values automatically chosen for the control points defined during parameterization. The optimization was achieved by using two algorithms, i.e., a stochastic algorithm and a gradient-based algorithm. For the stochastic algorithm, a genetic algorithm based on an artificial neural network was used as the optimization method in order to reach the global optimum. The evaluation of the successive design iterations was performed using the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it requires derivative information of the objective function. The objective function was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant. The performance was quantified by using a multi-objective function. Besides these two classes of optimization methods, there were four optimization cases, i.e., the hub only, the shroud only, and the combination of hub and shroud; in the fourth case, the shroud endwall was optimized by using the optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub was increased. The shroud optimization resulted in an increase in efficiency, and the total pressure loss and entropy were reduced. The combination of hub and shroud did not show the overwhelming results which were achieved for the individual cases of the hub and the shroud. This may be caused by the fact that there were too many control variables. The fourth case of optimization showed the best result because the optimized hub was used as the initial geometry to optimize the shroud. The efficiency was increased more than in the individual cases of optimization, with a mass flow rate equal to that of the baseline design of the turbine. The results of the artificial neural network and the conjugate gradient method were compared.

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 209
2542 Abdominal Organ Segmentation in CT Images Based On Watershed Transform and Mosaic Image

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Accurate liver, spleen, and kidney segmentation in abdominal CT images is one of the most important steps for computer-aided diagnosis of abdominal organ pathology. In this paper, we have proposed a new semi-automatic algorithm for liver, spleen, and kidney area extraction in abdominal CT images. Our proposed method is based on hierarchical segmentation and the watershed algorithm. In our approach, a powerful technique has been designed to suppress over-segmentation based on the mosaic image and on the computation of the watershed transform. The algorithm is carried out in two parts. In the first, we seek to improve the quality of the gradient-mosaic image. In this step, we propose a method for improving the gradient-mosaic image by applying the anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen, and kidneys. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with the manual segmentation performed by an expert. The experimental results are described in the last part of this work.
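
The general gradient-plus-watershed pipeline that this approach builds on can be sketched with scikit-image as below; the mosaic-image construction and the clinical CT data are not reproduced, and a synthetic test image with minima-based markers stands in for them.

```python
# Sketch of a generic marker-controlled watershed on a gradient image; the paper's
# mosaic-image and anisotropic-diffusion steps are only approximated here.
import numpy as np
from skimage import data, filters, feature, segmentation

image = data.coins()                                  # stand-in for an abdominal CT slice
smoothed = filters.gaussian(image, sigma=2)           # (anisotropic diffusion in the paper)
gradient = filters.sobel(smoothed)                    # gradient image to be flooded

# Markers from local minima of the smoothed image; the paper derives them from the
# hierarchical mosaic-image step instead.
coords = feature.peak_local_max(-smoothed, min_distance=20)
markers = np.zeros_like(image, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

labels = segmentation.watershed(gradient, markers)
print("regions found:", labels.max())
```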

Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, multi-abdominal organ segmentation, watershed algorithm

Procedia PDF Downloads 468
2541 Boundary Feedback Stabilization of an Overhead Crane Model

Authors: Abdelhadi Elharfi

Abstract:

A problem of boundary feedback (exponential) stabilization of an overhead crane model represented by a PDE is considered. For any $r>0$, exponential stability at the desired decay rate $r$ is achieved in a semigroup setting by a collocated-type stabiliser of a target system combined with a term involving the solution of an appropriate PDE.

Keywords: feedback stabilization, semi group and generator, overhead crane system

Procedia PDF Downloads 386
2540 Vector Quantization Based on Vector Difference Scheme for Image Enhancement

Authors: Biji Jacob

Abstract:

The vector quantization algorithm uses a minimum-distance calculation for codebook generation, a time-consuming calculation performed on each pixel value that leads to computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement purposes. In the proposed scheme, the vector differences between the vectors are considered as the new generation vectors, or new codebook vectors. The codebook is updated by comparing the new generation vector with a threshold value having minimum error with the parent vector. The minimum error decides the fitness of each newly generated vector. Thus, the codebook is generated in an adaptive manner, and the fitness value is determined for the suppression of the degraded portion of the image, thereby leading to the enhancement of the image through the adaptive searching capability of vector quantization via the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for enhancing the image, with peak signal to noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
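
The performance parameters named above (MSE, PSNR, and Euclidean distance) are computed as in the short sketch below; the vector-difference codebook update itself is not reproduced, and the image pair is synthetic.

```python
# Sketch: the evaluation metrics named in the abstract, on a synthetic image pair.
import numpy as np

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return np.inf if m == 0 else 10 * np.log10(peak ** 2 / m)

def euclidean_distance(a, b):
    return np.linalg.norm(a.astype(float) - b.astype(float))

original = np.random.randint(0, 256, (64, 64))
reconstructed = np.clip(original + np.random.randint(-5, 6, (64, 64)), 0, 255)
print("MSE:", mse(original, reconstructed))
print("PSNR (dB):", psnr(original, reconstructed))
print("E_dist:", euclidean_distance(original, reconstructed))
```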

Keywords: codebook, image enhancement, vector difference, vector quantization

Procedia PDF Downloads 239
2539 Estimation of Carbon Uptake of Seoul City Street Trees in Seoul and Plans for Increase Carbon Uptake by Improving Species

Authors: Min Woo Park, Jin Do Chung, Kyu Yeol Kim, Byoung Uk Im, Jang Woo Kim, Hae Yeul Ryu

Abstract:

Nine representative species among all the street trees were selected to estimate the amount of carbon dioxide absorbed by street trees in Seoul, calculating the biomass, amount of carbon saved, and annual absorption amount of carbon dioxide for each of the species. The planting distance of street trees in Seoul was 1,851,180 m, the number of planting lines was 1,287, the number of planted trees was 284,498, and 46 species of trees were planted as of 2013. According to the result of plugging the quantity of each species of street tree in Seoul into the absorption amount of each of the species, 120,097 tons of biomass, 60,049.8 tons of carbon saved, and 11,294 t CO2/year of annual carbon dioxide absorption were calculated. The street ratio mentioned in the road statistics of Seoul for 2022 is 23.13%. If the street trees are assumed to increase at the same rate, the number of street trees in Seoul was calculated to be 294,823. The planting distance was estimated to be 1,918,360 m, and the annual absorption amount of carbon dioxide was estimated to be 11,704 t CO2/year. Plans for improving the annual absorption amount of carbon dioxide from street trees were established based on the expected amount of absorption. The first is to improve the annual absorption amount of carbon dioxide by increasing the number of planted street trees after adjusting the planting distance of the street trees. If the current planting distance is adjusted to 6 m, it turns out that 12,692.7 t CO2/year would be absorbed on an annual basis. The second is to change the species of trees to tulip trees, which have a high absorption rate. If the proportion of tulip trees is increased to 30% by 2022, the annual absorption of carbon dioxide was calculated to be 17,804.4 t CO2/year.
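
The projection arithmetic above can be checked by scaling the 2013 per-tree absorption with the tree count, which reproduces the reported figures (a simplifying assumption: the species mix and per-tree uptake are held fixed).

```python
# Quick check of the scaling arithmetic reported in the abstract.
trees_2013 = 284_498
uptake_2013 = 11_294          # t CO2 / year for the 2013 inventory

trees_2022 = 294_823          # projected tree count
print(uptake_2013 * trees_2022 / trees_2013)        # ~11,704 t CO2/year

trees_6m_spacing = 1_918_360 / 6                    # projected planting length / 6 m spacing
print(uptake_2013 * trees_6m_spacing / trees_2013)  # ~12,692.7 t CO2/year
```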

Keywords: absorption of carbon dioxide, source of absorbing carbon dioxide, trees in city, improving species

Procedia PDF Downloads 337