Search results for: bi-conjugate gradient stabilized method
16146 Evaluation of Academic Research Projects Using the AHP and TOPSIS Methods
Authors: Murat Arıbaş, Uğur Özcan
Abstract:
Due to the increasing number of universities and academics, university research funds and the grants/supports given by government institutions have increased the number and quality of academic research projects. Although every academic research project has a specific purpose and importance, limited resources (money, time, manpower, etc.) require choosing the best ones from all (Amiri, 2010). Comparing projects and determining which one is better is a hard process, since the projects serve different purposes. In addition, the evaluation process becomes complicated when there is more than one evaluator and multiple criteria for the evaluation (Dodangeh, Mojahed and Yusuff, 2009). Mehrez and Sinuany-Stern (1983) characterized the project selection problem as a Multi Criteria Decision Making (MCDM) problem. If a decision problem involves multiple criteria and objectives, it is called a Multi Attribute Decision Making problem (Ömürbek & Kınay, 2013). There are many MCDM methods in the literature for the solution of such problems, among them AHP (Analytic Hierarchy Process), ANP (Analytic Network Process), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), UTADIS (Utilities Additives Discriminantes), ELECTRE (Elimination et Choix Traduisant la Realite), MAUT (Multiattribute Utility Theory), and GRA (Grey Relational Analysis). Each method has some advantages compared with the others (Ömürbek, Blacksmith & Akalın, 2013). Hence, to decide which MCDM method will be used for the solution of a problem, factors like the nature of the problem, the types of choices, the measurement scales, the type of uncertainty, the dependency among the attributes, the expectations of the decision maker, and the quantity and quality of the data should be considered (Tavana & Hatami-Marbini, 2011).
This study aims to develop a systematic decision process for grant support applications that are evaluated for scientific adequacy by multiple evaluators under certain criteria. In this context, the project evaluation process applied by The Scientific and Technological Research Council of Turkey (TÜBİTAK), one of the leading institutions in the country, was investigated. First, the criteria to be used in the project evaluation were decided. The main criteria were selected from among the TÜBİTAK evaluation criteria: originality of the project, methodology, project management/team and research opportunities, and extensive impact of the project. Moreover, 2-4 sub-criteria were defined for each main criterion, so the projects were evaluated over 13 sub-criteria in total. Because the AHP method is well suited to determining criteria weights and the TOPSIS method can rank a great number of alternatives, the two methods were used together. The AHP method, developed by Saaty (1977), is based on selection by pairwise comparisons. Because of its simple structure and ease of understanding, AHP is a very popular method in the literature for determining criteria weights in MCDM problems. The TOPSIS method, developed by Hwang and Yoon (1981) as an MCDM technique, is an alternative to the ELECTRE method and is used in many areas. In this method, the distance from each decision point to the ideal and to the negative-ideal solution point is calculated using the Euclidean distance. In the study, the main criteria and sub-criteria were compared pairwise using questionnaires, developed on an importance scale, answered by four relevant groups of people (i.e., TÜBİTAK specialists, TÜBİTAK managers, academics, and individuals from the business world). After these pairwise comparisons, the weight of each main criterion and sub-criterion was calculated using the AHP method.
These calculated criteria weights were then used as input to the TOPSIS method, and a sample of 200 projects was ranked on its merits. This new system made it possible to incorporate the views of the people who take part in the project process, including preparation, evaluation, and implementation, into the evaluation of academic research projects. Moreover, instead of evaluating projects using four equally weighted main criteria, a systematic decision-making process was developed using 13 weighted sub-criteria and each decision point's distance from the ideal solution. With this evaluation process, a new approach was created to determine the importance of academic research projects. Keywords: academic projects, AHP method, research projects evaluation, TOPSIS method
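The weighting-then-ranking pipeline described above can be sketched in a few lines. The decision matrix, weights, and benefit flags below are illustrative placeholders, not the study's data; only the TOPSIS closeness computation follows the standard formulation (vector normalization, weighted distances to the ideal and negative-ideal points):

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (higher = better)."""
    X = np.asarray(decision_matrix, float)
    R = X / np.linalg.norm(X, axis=0)          # vector-normalize each criterion column
    V = R * weights                            # apply AHP-derived weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # Euclidean distance to ideal point
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to negative-ideal point
    return d_neg / (d_pos + d_neg)             # closeness coefficient in [0, 1]

# three hypothetical projects scored on four criteria (all benefit-type)
scores = [[7, 9, 9, 8], [8, 7, 8, 7], [9, 6, 8, 9]]
weights = [0.4, 0.3, 0.2, 0.1]                 # e.g. from an AHP pairwise comparison
cc = topsis(scores, weights, benefit=[True, True, True, True])
ranking = np.argsort(-cc)                      # best project first
```

In the study's setting, the 13 sub-criterion weights from AHP would replace `weights` and the 200 projects' scores would form the decision matrix.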
Procedia PDF Downloads 593
16145 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour
Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale
Abstract:
Mumbai has traditionally been the epicenter of India's trade and commerce, and the existing major ports situated in the Thane estuary, Mumbai Port and Jawaharlal Nehru Port (JN), are developing their waterfront facilities. Various developments over the past decades in this region have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water in view of the advancing shoreline, while the jetty near Ulwe faces ship-scheduling problems due to the shallower depths between JN Port and Ulwe Bunder. Solving these problems requires information about tide levels over a long duration from field measurements. However, field measurement is a tedious and costly affair; artificial intelligence was therefore applied to predict water levels by training a network on the measured tide data for one lunar tidal cycle. A two-layer feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi for one lunar tidal cycle (2013) were used to train, validate, and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimation of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using the neural network trained with the measured tide data (2000) of Apollo and Pir-Pau.
The analysis of the measured data reveals the following. The measured tidal data at Pir-Pau, Vashi, and Ulwe indicate a maximum amplification of the tide by about 10-20 cm, with a phase lag of 10-20 minutes, with reference to the tide at Apollo Bunder (Mumbai). The LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer. The tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels to plan the operation of pumping at Pir-Pau and improve the ship schedule at Ulwe. Keywords: artificial neural network, back-propagation, tide data, training algorithm
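The measured tide series is not reproduced here, so the sketch below trains a small two-layer feed-forward network with plain gradient descent (the GD variant named above; Levenberg-Marquardt would replace the update rule) on a synthetic single-constituent tide. The network size, learning rate, and synthetic signal are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)[:, None]   # one synthetic tidal cycle (normalized time)
tide = np.sin(2 * np.pi * t)              # stand-in for measured water levels

# two-layer feed-forward net: 1 input -> 16 tanh hidden units -> 1 linear output
W1 = rng.normal(0, 3.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.3, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
_, y0 = forward(t)
mse_init = np.mean((y0 - tide) ** 2)
for _ in range(5000):                     # plain gradient-descent back-propagation
    h, y = forward(t)
    err = (y - tide) / len(t)             # gradient of (1/2) * mean squared error
    dh = (err @ W2.T) * (1.0 - h ** 2)    # back-propagate through tanh layer
    W2 -= lr * (h.T @ err); b2 -= lr * err.sum(0)
    W1 -= lr * (t.T @ dh);  b1 -= lr * dh.sum(0)

_, y1 = forward(t)
mse_final = np.mean((y1 - tide) ** 2)     # training error after descent
r = np.corrcoef(y1.ravel(), tide.ravel())[0, 1]
```

Reported against measured data, `r` plays the role of the study's correlation coefficient; the real model would be trained on the 2013 lunar-cycle record and verified against the 2000 record.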
Procedia PDF Downloads 486
16144 Simplifying Seismic Vulnerability Analysis for Existing Reinforced Concrete Buildings
Authors: Maryam Solgi, Behzad Shahmohammadi, Morteza Raissi Dehkordi
Abstract:
One of the main steps in the seismic retrofitting of buildings is determining the vulnerability of the structures. Current procedures for evaluating existing buildings are complicated and draw no distinction between short, mid-rise, and tall buildings. This research utilizes a simplified method for assessing structures that is adequate for existing reinforced concrete buildings. To this end, the Simple Lateral Mechanisms Analysis (SLaMA) procedure proposed by the NZSEE (New Zealand Society for Earthquake Engineering) has been carried out. In this study, three RC moment-resisting frame buildings are examined. First, these buildings are evaluated by the inelastic static (pushover) procedure based on acceptance criteria. Then, the Park-Ang damage index is determined for all members of each building by inelastic time-history analysis. Next, the Simple Lateral Mechanisms Analysis procedure, a hand method, is carried out to define the capacity of the structures. Ultimately, the existing procedures are compared in terms of the peak ground acceleration causing failure (PGAfail). The results of this comparison emphasize that the pushover procedure and the SLaMA method give a greater value of PGAfail than the Park-Ang damage model. Keywords: peak ground acceleration caused to fail, reinforced concrete moment-frame buildings, seismic vulnerability analysis, simple lateral mechanisms analysis
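The Park-Ang index used in the time-history assessment combines a normalized maximum-deformation term with a hysteretic-energy term, DI = δm/δu + β·Eh/(Fy·δu). A minimal sketch follows; the member quantities are hypothetical, not taken from the three buildings studied:

```python
def park_ang_damage_index(delta_m, delta_u, beta, e_h, f_y):
    """Park-Ang damage index: maximum-deformation term plus hysteretic-energy term."""
    return delta_m / delta_u + beta * e_h / (f_y * delta_u)

# hypothetical member quantities: maximum and ultimate displacement (m),
# calibration parameter beta, dissipated hysteretic energy (kN*m), yield strength (kN)
di = park_ang_damage_index(delta_m=0.06, delta_u=0.10, beta=0.05, e_h=12.0, f_y=150.0)
# DI values near or above 1.0 are commonly read as collapse-level damage
```

In the study, this index would be accumulated per member from the inelastic time-history response and compared against the pushover and SLaMA capacity estimates.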
Procedia PDF Downloads 97
16143 Displacement Based Design of a Dual Structural System
Authors: Romel Cordova Shedan
Abstract:
The traditional seismic design methodology is Force Based Design (FBD). Displacement Based Design (DBD) is a seismic design methodology that accepts structural damage in order to achieve a controlled failure mechanism of the structure before collapse. It is easier to quantify damage to a structure with displacements than with forces; therefore, for a structure to reach its inelastic design displacement with good ductility, some damage is necessary. The first part of this investigation covers the differences between the DBD and FBD methodologies and some advantages of DBD. The second part presents a case study of a 5-story dual-system building, regular in plan and elevation, located in a seismic zone where the design acceleration on firm soil is 45% of the acceleration of gravity. Both methodologies are then applied to the case study to compare displacements, shear forces, and overturning moments. In the third part, Dynamic Time History Analysis (DTHA) is carried out to compare displacements with the DBD and FBD methodologies. Three accelerograms were used, with their magnitudes scaled to be spectrum-compatible with the design spectrum. Then, using the ASCE 41-13 guidelines, plastic hinges were assigned to the structure. Finally, the results of both methodologies for the case study are compared. It is important to note that the seismic performance level of the building is greater for DBD than for the FBD method, because the DBD drifts are on the order of 2.0% to 2.5%, compared with FBD drifts of 0.7%. Therefore, the displacements of DBD are greater than those of the FBD method, and the shear forces of DBD are also greater than those of FBD. These strengths of the DBD method ensure that the structure achieves the design inelastic displacements, because they were obtained using a displacement spectrum reduction factor that depends on the damping and ductility of the dual system. Also, the displacements of the case study for DBD are greater than those of FBD and DTHA.
This demonstrates that the seismic performance level of the building for DBD is greater than for the FBD method, with DBD drifts on the order of 2.0% to 2.5% compared with the small FBD drifts of 0.7%. Keywords: displacement-based design, displacement spectrum reduction factor, dynamic time history analysis, force based design
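The displacement spectrum reduction factor mentioned above can be illustrated with expressions commonly used in direct displacement-based design. The equivalent-damping formula below is a Priestley-type expression for RC frames; both forms and the ductility value are illustrative assumptions, not the paper's calibration:

```python
import math

def equivalent_viscous_damping(mu):
    # Priestley-type expression for RC frames (one of several published forms)
    return 0.05 + 0.565 * (mu - 1) / (mu * math.pi)

def spectrum_reduction_factor(xi):
    # scales the 5%-damped displacement spectrum down to damping level xi
    return math.sqrt(0.07 / (0.02 + xi))

mu = 4.0                                 # assumed design displacement ductility
xi = equivalent_viscous_damping(mu)      # equivalent viscous damping of the system
r_xi = spectrum_reduction_factor(xi)     # < 1: demand reduced by hysteretic damping
```

At 5% damping the factor equals 1 by construction; higher ductility raises the equivalent damping and lowers the spectral displacement demand, which is the mechanism the abstract attributes the larger DBD strengths to.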
Procedia PDF Downloads 229
16142 Empirical Mode Decomposition Based Denoising by Customized Thresholding
Authors: Wahiba Mohguen, Raïs El’hadi Bekka
Abstract:
This paper presents a denoising method, called EMD-Custom, based on Empirical Mode Decomposition (EMD) and a modified customized thresholding function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). Then, the noisy IMFs are thresholded by applying the presented thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising methods using soft and hard thresholding. The results showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches. The performances were evaluated in terms of SNR in dB and Mean Square Error (MSE). Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft thresholding
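The sifting stage of EMD needs a dedicated implementation (e.g. a package such as PyEMD), but the thresholding stage described above can be sketched independently on precomputed IMFs. The interpolating "customized" function below is an illustrative stand-in for the paper's Custom function, which the abstract does not fully specify:

```python
import numpy as np

def hard_threshold(c, thr):
    # keep coefficients above the threshold unchanged, zero the rest
    return np.where(np.abs(c) > thr, c, 0.0)

def soft_threshold(c, thr):
    # shrink surviving coefficients toward zero by thr
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

def custom_threshold(c, thr, alpha=0.5):
    # interpolates between hard (alpha=0) and soft (alpha=1) thresholding
    return np.where(np.abs(c) > thr, c - alpha * np.sign(c) * thr, 0.0)

def denoise(imfs, sigma=1.0, alpha=0.5):
    """Threshold each IMF with a universal threshold and sum to reconstruct."""
    n = imfs.shape[1]
    thr = sigma * np.sqrt(2.0 * np.log(n))   # universal threshold for noise level sigma
    return sum(custom_threshold(c, thr, alpha) for c in imfs)
```

In practice the noise level `sigma` is estimated per IMF (often from the first, noise-dominated mode), and only the noisy IMFs are thresholded before reconstruction.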
Procedia PDF Downloads 303
16141 The Attitudinal Effects of Dental Hygiene Students When Changing Conventional Practices of Preventive Therapy in the Dental Hygiene Curriculum
Authors: Shawna Staud, Mary Kaye Scaramucci
Abstract:
Objective: Rubber cup polishing has been a traditional method of preventive therapy in dental hygiene treatment. Newer methods such as air polishing have changed the way dental hygiene care is provided, yet this technique has not been embraced by students in the program or by practitioners in the workforce. Students entering the workforce tend to follow office protocol and lack the confidence to introduce technologies learned in the curriculum. This project was designed to help students gain confidence in newer skills and to encourage private practice settings to adopt newer technologies for patient care. Our program recently introduced air polishing earlier in the program, before the rubber cup technique, to determine whether students would embrace the technology and become leading-edge professionals when they enter the marketplace. Methods: The class of 2022 was taught the traditional method of polishing in the first-year curriculum and air polishing in the second-year curriculum. The class of 2023 will be taught the air polishing method in the first-year curriculum and the traditional method of polishing in the second-year curriculum. Pre- and post-graduation survey data will be collected from both cohorts. Data will be assessed using descriptive statistics and paired pre/post t-tests, with alpha set at .05, to compare pre- and post-survey results. Results: This study is currently in progress, with a completion date of October 2023. The class of 2022 completed the pre-graduation survey in the spring of 2022. The post-graduation survey will be sent out in October 2022. The class of 2023 cohort will be surveyed in the spring of 2023 and in October 2023. Conclusion: Our hypothesis is that students who are taught air polishing first will be more inclined to adopt that skill in private practice, thereby embracing newer technology and improving oral health care.
Procedia PDF Downloads 105
16140 A Nonlinear Feature Selection Method for Hyperspectral Image Classification
Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo
Abstract:
For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon caused by the difficulty of collecting training samples. Hence, many researchers have developed feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature, and it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths so that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also show the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as weights of the bands. The smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset.
All non-background samples were used to form the testing dataset. The support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas F-score and HSIC select 168 features and 217 features, respectively. Moreover, the classification accuracies increase dramatically using only the first few features: the classification accuracies for feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84168) approximates the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, we obtain similar results. These results illustrate that our proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method first to determine the suitable feature subset for a specific purpose; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This not only improves the classification performance but also reduces the cost of obtaining hyperspectral images. Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine
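The core measure can be sketched as a generalized RBF kernel with one bandwidth per band and a within-minus-between class separability score, evaluated here on assumed toy data. The genetic-algorithm tuning loop is omitted; any optimizer could adjust `bandwidths` to maximize `separability`:

```python
import numpy as np

def generalized_rbf(X, Y, bandwidths):
    # K(x, y) = exp(-sum_d (x_d - y_d)^2 / (2 * s_d^2)), one bandwidth s_d per band
    s2 = 2.0 * np.asarray(bandwidths, float) ** 2
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2 / s2).sum(-1)
    return np.exp(-d2)

def separability(X, y, bandwidths):
    # mean within-class kernel similarity minus mean between-class similarity
    K = generalized_rbf(X, X, bandwidths)
    same = y[:, None] == y[None, :]
    return K[same].mean() - K[~same].mean()

def band_order(bandwidths):
    # rank bands by the reciprocal of their bandwidth: smaller bandwidth = heavier weight
    return np.argsort(-1.0 / np.asarray(bandwidths, float))
```

After optimization, the descending reciprocals returned by `band_order` give the feature-subset order described above, with an SVM evaluating each candidate subset.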
Procedia PDF Downloads 265
16139 A Method to Assess Aspect of Sustainable Development: Walkability
Authors: Amna Ali Al-Saadi, Riken Homma, Kazuhisa Iki
Abstract:
Although many places have succeeded in achieving some aspects of sustainable urban development, there are no scientific facts to convince decision makers, and each solution was developed to fulfill the needs of a specific city only. Therefore, an objective method for generating solutions from a successful case is the aim of this research. The questions were: how to learn the lesson from each case study; how to distinguish the potential criteria from the negative ones; and how to quantify their effects on future development. Walkability was selected as the goal, because it has been found to promote a healthy lifestyle as well as social, environmental, and economic sustainability, and because it is as complicated as any other aspect of sustainable development. This research stands on a quantitative-comparative methodology for assessing pedestrian-oriented development. Three analyzed areas (AAs) were selected: one site in Oman, hypothesized to represent motorized-oriented development, and two sites in Japan, where development is pedestrian friendly. The study used the multi-criteria evaluation method (MCEM). Initially, MCEM stands on the analytic hierarchy process (AHP). The latter was structured into a main goal (walkability), objectives (functions and layout), and attributes (the urban form criteria). Next, GIS was used to evaluate the attributes in multi-criteria maps. Since each criterion has a different scale of measurement, all results were standardized by z-score and used to measure the correlations among criteria. As a result, a different scenario was generated from each AA. MCEM (AHP-OWA) with GIS measured the walkability score and determined the priority of criteria development in the non-walker-friendly environment. The comparison of criteria z-scores presented a measurably distinguished orientation of development. This result was used to show that the Omani site is a motorized environment while the Japanese sites are walkable.
It also identified the strong criteria and the weak criteria regardless of the AA; this result was used to generalize the priorities for walkable development. In conclusion, the method was found successful in generating a scientific basis for policy decisions. Keywords: walkability, policy decisions, sustainable development, GIS
Procedia PDF Downloads 440
16138 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques
Authors: Soheila Sadeghi
Abstract:
In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. 
The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success. Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes
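The project dataset is not public, so the sketch below stands in with synthetic task records and plain least squares; the tree ensembles named above (Random Forest, XGBoost, etc.) would slot into the same fit/predict pattern. The feature names and the generating relationship are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# synthetic task features: productivity rate, scope-change magnitude (%), timing (0-1)
X = np.column_stack([rng.uniform(0.5, 1.5, n),
                     rng.uniform(0.0, 30.0, n),
                     rng.uniform(0.0, 1.0, n)])
# assumed ground truth: cost overrun grows with change magnitude and late timing
y = 2.0 * X[:, 1] + 15.0 * X[:, 2] - 5.0 * X[:, 0] + rng.normal(0.0, 2.0, n)

# train/test split, then ordinary least squares with an intercept column
train, test = slice(0, 150), slice(150, None)
A = np.column_stack([np.ones(150), X[train]])
coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)

pred = np.column_stack([np.ones(50), X[test]]) @ coef
mse = np.mean((pred - y[test]) ** 2)                      # evaluation metric: MSE
r2 = 1 - np.sum((pred - y[test]) ** 2) / np.sum((y[test] - y[test].mean()) ** 2)
```

Cross-validation, residual plots, and hyperparameter tuning would wrap this same split-fit-score loop; feature importances come from the fitted coefficients here, or from impurity/gain scores in the tree models.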
Procedia PDF Downloads 45
16137 Utilization of Hybrid Teaching Methods to Improve Writing Skills of Undergraduate Students
Authors: Tahira Zaman
Abstract:
The paper intends to explore the utility of hybrid teaching methods in helping undergraduate students improve their English academic writing skills. A total of 45 undergraduate students of varying language abilities were selected randomly from three classes, with monitoring and rubric evaluation as the means of measurement in the research design. The students' language skills were developed through experiential learning methods: a reflective writing technique, a guided method in which students were directed toward correct writing techniques, and a self-guided method in which students produced a library-research-based article assessed with a standardized rubric. The progress of the students was monitored and checked through rubrics and self-evaluation, and it was concluded that a change was observed in the students' writing abilities. Keywords: self-evaluation, hybrid, reflective writing
Procedia PDF Downloads 164
16136 Some Efficient Higher Order Iterative Schemes for Solving Nonlinear Systems
Authors: Sandeep Singh
Abstract:
In this article, two classes of iterative schemes are proposed for approximating solutions of nonlinear systems of equations, with orders of convergence six and eight, respectively. The sixth-order scheme requires the evaluation of two vector functions, two first Fréchet derivatives, and three matrix inversions per iteration. This three-step sixth-order method is further extended to an eighth-order method, which requires one more step and the evaluation of one extra vector function. Moreover, computational efficiency is compared with some other recently published methods, and our methods are found to be more efficient than existing numerical methods for large and medium-size nonlinear systems of equations. Numerical tests are performed to validate the proposed schemes. Keywords: nonlinear systems, computational complexity, order of convergence, Jarratt-type scheme
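The sixth- and eighth-order formulas are not given in the abstract, so they are not reproduced here. As the building block that such Jarratt-type schemes extend, here is a quadratically convergent Newton iteration for a nonlinear system, with the Fréchet derivative approximated by finite differences; the test system is an arbitrary example:

```python
import numpy as np

def num_jacobian(F, x, h=1e-7):
    """Finite-difference approximation of the Fréchet derivative (Jacobian) of F at x."""
    n = len(x)
    J = np.empty((n, n))
    fx = F(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - fx) / h
    return J

def newton_system(F, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0: solve J dx = -F at each step."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        dx = np.linalg.solve(num_jacobian(F, x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# example system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
root = newton_system(F, [2.0, 0.3])
```

The proposed higher-order schemes add extra function and derivative evaluations per step to raise the convergence order from two to six or eight, trading per-iteration cost for fewer iterations.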
Procedia PDF Downloads 140
16135 Numerical Investigation of Fluid Outflow through a Retinal Hole after Scleral Buckling
Authors: T. Walczak, J. K. Grabski, P. Fritzkowski, M. Stopa
Abstract:
The objectives of the study are (i) to perform numerical simulations that permit an analysis of the dynamics of the subretinal fluid when an implant has induced scleral intussusception and (ii) to assess the impact of the physical parameters of the model on the flow rate. Computer simulations were created using the finite element method (FEM), based on a model that takes into account the interaction of a viscous fluid (subretinal fluid) with a hyperelastic body (the retina). The purpose of the calculation was to investigate the dependence of the flow rate of the subretinal fluid through a hole in the retina on factors such as the viscosity of the subretinal fluid, the material parameters of the retina, and the offset of the implant from the retina's hole. These simulations were performed for different speeds of eye movement that reflect the behavior of the eye during reading, REM, and saccadic movements. As in other works in the field of subretinal fluid flow, a stationary, one-sided, forced fluid flow was assumed in the considered area simulating the subretinal space. Additionally, a hyperelastic material model of the retina and a parameterized geometry of the considered model were adopted. The calculations also examined the influence of the direction of gravity, due to the position of the patient's head, on the trend of fluid outflow. The simulations revealed that fluid outflow from the retina becomes significant at an eyeball movement speed of 100°/sec. This speed is greater than in the case of reading but is four times less than that of saccadic movement. Increasing the viscosity of the fluid increased the beneficial effect. Further, the simulation results suggest that a moderate eye movement speed is optimal and that the conventional prescription of avoiding routine eye movement following retinal detachment surgery could be relaxed.
Additionally, to verify the numerical results, some calculations were repeated using a meshless method (the method of fundamental solutions), which is relatively fast and easy to implement. This work was supported by grant 02/21/DSPB/3477. Keywords: CFD simulations, FEM analysis, meshless method, retinal detachment
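The method of fundamental solutions expands the solution in free-space Green's functions with singularities placed outside the domain, so no mesh is needed. A minimal sketch for Laplace's equation on the unit disk; the geometry, source radius, and boundary data are illustrative, and the paper's fluid-structure problem is far richer:

```python
import numpy as np

# MFS for Laplace's equation on the unit disk with Dirichlet data u = x on the boundary
n_pts = 40
th = np.linspace(0.0, 2 * np.pi, n_pts, endpoint=False)
col = np.column_stack([np.cos(th), np.sin(th)])        # collocation points on boundary
src = 1.5 * np.column_stack([np.cos(th), np.sin(th)])  # source points outside the domain

def G(p, q):
    # fundamental solution of the 2-D Laplacian: -(1/2*pi) * ln|p - q|
    r = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return -np.log(r) / (2.0 * np.pi)

A = G(col, src)
g = col[:, 0]                                          # boundary data u = x
coef, *_ = np.linalg.lstsq(A, g, rcond=None)           # fit source strengths

def u(points):
    # the MFS approximation is harmonic everywhere inside the disk by construction
    return G(points, src) @ coef

approx = u(np.array([[0.3, 0.2]]))[0]                  # exact interior solution is u = x
```

Because every basis function already satisfies the governing equation, only the boundary condition is fitted, which is what makes the method fast to set up next to a full FEM model.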
Procedia PDF Downloads 344
16134 Structural Analysis of Kamaluddin Behzad's Works Based on Roland Barthes' Theory of Communication, 'Text and Image'
Authors: Mahsa Khani Oushani, Mohammad Kazem Hasanvand
Abstract:
Text and image have always been two important components of Iranian layout, and the interactive connection between them has shaped the art of book design in multiple patterns. In this research, the structure and visual elements in the research data were first analyzed; then the position of the text element and the image element in relation to each other was studied and analyzed based on Roland Barthes' three theories of text and image, and the results were compared and interpreted. The purpose of this study is to investigate the pattern of text and image in the works of Kamaluddin Behzad based on three of Roland Barthes' communication theories: 1. descriptive communication, 2. reference communication, and 3. matched communication. The questions of this research are: what is the relationship between text and image in Behzad's works, and how is it defined according to Roland Barthes' theory? This research was conducted with a structuralist approach and a descriptive-analytical method based on library research; the information was collected from documents (library sources) and online databases. The findings show that the dominant element in Behzad's drawings is the image, which creates a reference relationship in the layout of the drawings. In some cases, however, a different relationship is achieved: despite the preference for the image on the page, the text is dispersed proportionally on the page and plays a more active role within the image. The text and the image then support each other equally on the page; Roland Barthes equates this connection. Keywords: text, image, Kamaluddin Behzad, Roland Barthes, communication theory
Procedia PDF Downloads 194
16133 Structural and Magnetic Properties of CoFe2-xNdxO4 Spinel Ferrite Nanoparticles
Authors: R. S. Yadav, J. Havlica, I. Kuřitka, Z. Kozakova, J. Masilko, M. Hajdúchová, V. Enev, J. Wasserbauer
Abstract:
In the present work, CoFe2-xNdxO4 (0.0 ≤ x ≤ 0.1) spinel ferrite nanoparticles were synthesized by a starch-assisted sol-gel auto-combustion method. Powder X-ray diffraction patterns revealed the formation of the cubic spinel ferrite, with the signature of an NdFeO3 phase at higher Nd3+ concentrations. A field emission scanning electron microscopy study demonstrated spherical nanoparticles in the size range of 5-15 nm. Raman and Fourier-transform infrared spectra supported the formation of the spinel ferrite structure in nanocrystalline form. X-ray photoelectron spectroscopy (XPS) analysis confirmed the presence of Co2+ and Fe3+ at octahedral as well as tetrahedral sites in the CoFe2-xNdxO4 nanoparticles. A change in magnetic properties with varying concentration of Nd3+ ions in the cobalt ferrite nanoparticles was observed. Keywords: nanoparticles, spinel ferrites, sol-gel auto-combustion method, CoFe2-xNdxO4
Procedia PDF Downloads 502
16132 Integrations of Students' Learning Achievements and Their Analytical Thinking Abilities with the Problem-Based Learning and the Concept Mapping Instructional Methods on Gene and Chromosome Issue at the 12th Grade Level
Authors: Waraporn Thaimit, Yuwadee Insamran, Natchanok Jansawang
Abstract:
Analytical thinking is a critical component of visual thinking that gives one the ability to solve problems quickly and effectively by breaking complex problems into components; learning achievement is the result achieved or acquired by the students, reflected in changes within the individual as a result of learning activity. The aim of this study is to compare students' analytical thinking abilities and their learning achievements. The sample consisted of 80 students at the 12th grade level in 2 classes from Chaturaphak Phiman Ratchadaphisek School: a 40-student experimental group taught with the Problem-Based Learning (PBL) method and a 40-student control group taught with the Concept Mapping Instructional (CMI) method. The research instruments consisted of 5-lesson instructional plans, assessed with pretest and posttest techniques for each instructional method. Students' analytical thinking abilities were assessed with the Analytical Thinking Tests, and their learning achievements were tested with the Learning Achievement Tests. Statistically significant differences between post- and pre-tests, using the paired t-test and F-test (two-way MANCOVA), were found for the whole group of students in the two chemistry classes. Associations between students' learning outcomes under each instructional method and between their analytical thinking abilities and their learning achievements were also found (p < .05). The use of the two instructional methods in this study revealed that learning achievement in the chemistry classes was higher for the PBL group than for the CMI group.
The findings suggest that analytical thinking ability involves gathering relevant information and identifying the key issues related to learning achievement. Keywords: comparisons, students' learning achievements, analytical thinking abilities, the problem-based learning method, the concept mapping instructional method, gene and chromosome issue, chemistry classes
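The pre/post comparison in this abstract rests on the paired t statistic. A minimal sketch follows, with made-up scores standing in for the study's data (the actual sample is not reproduced here):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre/post scores of the same students."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    # mean difference divided by its standard error
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Illustrative scores only, not the study's data.
pre  = [12, 15, 11, 14, 13, 10, 16, 12]
post = [18, 20, 17, 19, 18, 15, 21, 17]
t = paired_t(pre, post)
print(round(t, 2))
```

With these illustrative numbers the statistic is large, consistent with a significant pre-to-post gain; the study itself additionally applied a two-way MANCOVA across the two groups.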
Procedia PDF Downloads 263

16131 Experimental Performance and Numerical Simulation of Double Glass Wall
Authors: Thana Ananacha
Abstract:
This paper reports the numerical and experimental performance of a Double Glass Wall. Two configurations were considered, namely the Double Clear Glass Wall (DCGW) and the Double Translucent Glass Wall (DTGW). The coupled governing equations and boundary conditions were solved using the finite element method (FEM) via COMSOL™ Multiphysics. Temperature profiles and flow fields of the DCGW and DTGW are reported and discussed. Two constant heat fluxes were considered, namely 400 and 800 W·m⁻²; the corresponding initial temperatures were set to 30.5 and 38.5 °C, respectively. The results show that the simulation is in agreement with the experimental data. Conclusively, the model considered in this study can reasonably be used to simulate the thermal and ventilation performance of the DCGW and DTGW configurations. Keywords: thermal simulation, Double Glass Wall, velocity field, finite element method (FEM)
Procedia PDF Downloads 362

16130 Simulation of Nonlinear Behavior of Reinforced Concrete Slabs Using Rigid Body-Spring Discrete Element Method
Authors: Felix Jr. Garde, Eric Augustus Tingatinga
Abstract:
Most analysis procedures for reinforced concrete (RC) slabs are based on elastic theory. When subjected to large forces, however, slabs deform beyond the elastic range, and the study of their behavior and performance requires nonlinear analysis. This paper presents a numerical model to simulate the nonlinear behavior of RC slabs using a rigid body-spring discrete element method. The proposed slab model, composed of rigid plate elements and nonlinear springs, is based on the yield line theory, which assumes that the nonlinear behavior of an RC slab subjected to transverse loads is contained in plastic or yield lines. In this model, the displacement of the slab is completely described by the rigid elements, and the deformation energy is concentrated in flexural springs uniformly distributed along the potential yield lines. The spring parameters are determined by comparing the transverse displacements and stresses developed in the slab obtained using FEM with those of the proposed model, assuming a homogeneous material. Numerical models of typical RC slabs with varying geometry, reinforcement, support conditions, and loading conditions show reasonable agreement with available experimental data. The model was also shown to be useful in investigating the dynamic behavior of slabs. Keywords: RC slab, nonlinear behavior, yield line theory, rigid body-spring discrete element method
Procedia PDF Downloads 326

16129 Investigating the Effect of Groundwater Level on Nailing Arrangement in Excavation Stability
Authors: G. Khamooshian, A. Abbasimoshaei
Abstract:
Different methods are used to stabilize excavations, among which nailing is commonly used. In recent years, the use of nailing for excavation stability has received much attention: it provides sufficient stability, controls the deformation of the retaining structure, and reduces the cost of the operation. In addition, this method is more prominent in deep excavations than other methods. The purpose of this paper is to investigate the effect of groundwater level and soil type on the length and design of the nails. Analysis and modeling were carried out for a vertical excavation wall of constant depth at different groundwater levels. By varying the soil resistance parameters and the nail design, an optimum arrangement was obtained, and the effects of changes in groundwater level and soil type on the nail design, the maximum axial force mobilized in the nails, and the safety factor for the stability of the excavation were examined. Keywords: excavation, soil effects, nailing, hole analyzing
Procedia PDF Downloads 186

16128 A Source Point Distribution Scheme for Wave-Body Interaction Problem
Authors: Aichun Feng, Zhi-Min Chen, Jing Tang Xing
Abstract:
A two-dimensional linear wave-body interaction problem can be solved with a desingularized integral method by placing free-surface Rankine sources over the calm water surface and satisfying boundary conditions at prescribed collocation points on that surface. A new free-surface Rankine source distribution scheme, determined by the intersection points of the free surface and the body surface, is developed to reduce the numerical computation cost. Associated with this, a new treatment is given to the intersection point. Results from the present scheme are in good agreement with traditional numerical results and measurements. Keywords: source point distribution, panel method, Rankine source, desingularized algorithm
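A minimal sketch of the building block of such a scheme: the velocity that desingularized 2D Rankine point sources, raised a small distance above the calm water surface (y = 0), induce at collocation points on the surface. The source positions, unit strengths, and 0.5 offset are illustrative assumptions, not values from the paper:

```python
import math

def source_velocity(xs, ys, xp, yp, strength=1.0):
    """Velocity at (xp, yp) induced by a 2D Rankine point source at (xs, ys)."""
    dx, dy = xp - xs, yp - ys
    r2 = dx * dx + dy * dy
    coef = strength / (2.0 * math.pi * r2)
    return coef * dx, coef * dy

# Desingularized layout: sources raised above the calm water surface,
# collocation points on the surface itself (assumed geometry).
offset = 0.5
sources = [(x, offset) for x in (-1.0, 0.0, 1.0)]
colloc  = [(x, 0.0) for x in (-1.0, 0.0, 1.0)]

results = {}
for xp, yp in colloc:
    u = v = 0.0
    for xs, ys in sources:
        du, dv = source_velocity(xs, ys, xp, yp)
        u += du
        v += dv
    results[xp] = (u, v)
    print(f"x={xp:+.1f}: u={u:+.4f}, v={v:+.4f}")
```

In a full desingularized method, the source strengths would be solved from a linear system enforcing the free-surface and body boundary conditions at these collocation points; raising the sources off the surface keeps the kernel non-singular there.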
Procedia PDF Downloads 365

16127 The Impact of Using Microlearning to Enhance Students' Programming Skills and Learning Motivation
Authors: Ali Alqarni
Abstract:
This study aims to explore the impact of microlearning on the development of the programming skills as well as on the motivation for learning of first-year high schoolers in Jeddah. The sample consists of 78 students, distributed as 40 students in the control group and 38 students in the treatment group. The quasi-experimental method, which is a type of quantitative method, was used in this study. In addition to the technological tools used to create and deliver the digital content, the study utilized two tools to collect the data: first, an observation card containing a list of programming skills, and second, a tool to measure the student's motivation for learning. The findings indicate that microlearning positively impacts programming skills and learning motivation for students. The study, then, recommends implementing and expanding the use of microlearning in educational contexts both at the general education level and at the higher education level. Keywords: educational technology, teaching strategies, online learning, microlearning
Procedia PDF Downloads 133

16126 Investigating the Motion of a Viscous Droplet in Natural Convection Using the Level Set Method
Authors: Isadora Bugarin, Taygoara F. de Oliveira
Abstract:
Binary fluids and emulsions in general are present in a vast range of industrial, medical, and scientific applications, showing complex behaviors that define the flow dynamics and the system operation. However, the literature describing these fluids in non-isothermal models is still limited. The present work provides a detailed investigation of droplet migration due to natural convection in a square enclosure, aiming to clarify the effects of drop viscosity on the flow dynamics by showing how distinct viscosity ratios (droplet/ambient fluid) influence the drop motion and the final movement pattern reached in the stationary regime. The analysis considered distinct combinations of Rayleigh number, drop initial position, and viscosity ratio. The Navier-Stokes and energy equations were solved under the Boussinesq approximation in a laminar flow using the finite differences method combined with the Level Set method for the binary flow solution. Previous results collected by the authors showed that the Rayleigh number and the drop initial position drastically affect the motion pattern of the droplet. For Ra ≥ 10⁴, two very marked behaviors were observed according to the initial position: the drop travels either a helical path towards the center or a cyclic circular path resulting in a closed cycle in the stationary regime. Varying the viscosity ratio significantly altered this pattern, showing a large influence on the droplet path, capable of modifying the flow's behavior. Analyses of viscosity effects on the flow's unsteady Nusselt number were also performed. Among the relevant contributions of this work is the potential use of the flow initial conditions as a mechanism to control droplet migration inside the enclosure. Keywords: binary fluids, droplet motion, level set method, natural convection, viscosity
Procedia PDF Downloads 122

16125 Sound Insulation between Buildings: The Impact Noise Transmission through Different Floor Configurations
Authors: Abdelouahab Bouttout, Mohamed Amara
Abstract:
The present paper examines impact noise transmission through several floor building assemblies. The Acoubat software was used to simulate the impact noise transmission through different floor configurations used in the Algerian construction mode, and the results are compared with the available measurements. We developed two experimental methods: i) a field method, and ii) a laboratory method using Brüel & Kjær equipment. The results show that the different floor configurations need some improvement to ensure acoustic comfort in the receiving apartment; the impact sound level in the receiving room should not exceed the recommended value of 58 dB. The results obtained in this paper can be used as a platform to improve the Algerian building acoustic regulations aimed at the construction of multi-storey residential buildings. Keywords: impact noise, building acoustic, floor insulation, resilient material
Procedia PDF Downloads 375

16124 The Effect of Additives on Characterization and Photocatalytic Activity of Ag-TiO₂ Nanocomposite Prepared via Sol-Gel Process
Authors: S. Raeis Farshid, B. Raeis Farshid
Abstract:
Ag-TiO₂ nanocomposites were prepared by the sol-gel method with and without additives such as carboxymethyl cellulose (CMC), polyethylene glycol (PEG), polyvinyl pyrrolidone (PVP), and hydroxypropyl cellulose (HPC). The characteristics of the prepared Ag-TiO₂ nanocomposites were identified by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), and scanning electron microscopy (SEM). The additives have a significant effect on the particle size distribution and photocatalytic activity of the Ag-TiO₂ nanocomposites: SEM images showed that the particle size distribution in the presence of HPC was the best among the samples. The photocatalytic activity of the synthesized nanocomposites was investigated for the decolorization of methyl orange (MO) in water under UV irradiation in a batch reactor, and the results showed that the photocatalytic activity of the nanocomposites was increased by CMC, PEG, PVP, and HPC, in that order. Keywords: sol-gel method, Ag-TiO₂, decolorization, photocatalyst, nanocomposite
Procedia PDF Downloads 82

16123 Lateral Torsional Buckling: Tests on Glued Laminated Timber Beams
Authors: Vera Wilden, Benno Hoffmeister, Markus Feldmann
Abstract:
Glued laminated timber (glulam) is a preferred choice for long-span girders, e.g., for gyms or storage halls. While the material provides sufficient strength to resist the bending moments, large spans lead to increased slenderness of such members and to a higher susceptibility to stability issues, in particular to lateral torsional buckling (LTB). Rules for determining the ultimate LTB resistance are provided by Eurocode 5. The resistance may be verified using the so-called equivalent member method or by means of second-order theory calculations (direct method) considering equivalent imperfections. Both methods have significant limitations concerning their applicability: the equivalent member method is limited to rather simple cases, while the direct method lacks detailed provisions regarding imperfections and requirements for numerical modeling. In this paper, the results of a test series on slender glulam beams in three- and four-point bending are presented. The tests were performed in an innovative, newly developed testing rig allowing for a very precise definition of loading and boundary conditions. The load was introduced by a hydraulic jack that follows the lateral deformation of the beam by means of a servo-controller coupled with the tested member, keeping the load direction vertical. The deformation-controlled tests allowed for the identification of the ultimate limit state (governed by elastic stability) and the corresponding deformations. Prior to the tests, the structural and geometrical imperfections were determined and later used in the numerical models. After the stability tests, the nearly undamaged members were tested again in pure bending until reaching the ultimate moment resistance of the cross-section.
These results, accompanied by numerical studies, were compared to resistance values obtained using both methods according to Eurocode 5. Keywords: experimental tests, glued laminated timber, lateral torsional buckling, numerical simulation
Procedia PDF Downloads 241

16122 Increasing the Speed of the Apriori Algorithm by Dimension Reduction
Authors: A. Abyar, R. Khavarzadeh
Abstract:
The most basic and important decision-making tool for industrial and service managers is an understanding of the market and customer behavior. In this regard, the Apriori algorithm, one of the well-known machine learning methods, is used to identify customer preferences. On the other hand, with the increasing diversity of goods and services and the speed at which customer behavior changes, we are faced with big data. Due to the large number of competitors and changing customer behavior, there is an urgent need for continuous analysis of this big data, while the speed of the Apriori algorithm decreases as data volume increases. In this paper, the big data PCA method is used to reduce the dimension of the data in order to increase the speed of the Apriori algorithm. In the simulation section, the results are examined by generating data of different volumes and different diversity. The results show that with this method the speed of the Apriori algorithm increases significantly. Keywords: association rules, Apriori algorithm, big data, big data PCA, market basket analysis
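The frequent-itemset pass that dominates Apriori's running time can be sketched in a few lines. This is a generic textbook implementation on toy baskets, not the paper's big data PCA variant:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: frequent itemsets with their support counts."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    current = [frozenset([i]) for i in items]  # candidate 1-itemsets
    frequent = {}
    k = 1
    while current:
        # count support of each candidate with a full scan of the data
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: cnt for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(survivors)
        # generate (k+1)-itemset candidates from surviving k-itemsets
        k += 1
        keys = list(survivors)
        current = list({a | b for a, b in combinations(keys, 2) if len(a | b) == k})
    return frequent

baskets = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]
freq = apriori(baskets, min_support=0.5)
for itemset, count in sorted(freq.items(), key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print(set(itemset), count)
```

Each candidate generation triggers another scan of all transactions, which is why the algorithm slows on large, high-dimensional data and why reducing the dimension first, as the paper proposes, pays off.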
Procedia PDF Downloads 9

16121 Quantitative Method of Measurement for the Rights and Obligations of Contracting Parties in Standard Forms of Contract in Malaysia: A Case Study
Authors: Sim Nee Ting, Lan Eng Ng
Abstract:
Standard forms of contract in Malaysia are pre-written, printed contractual documents drafted by recognised authoritative bodies to describe the rights and obligations of the contracting parties in construction projects in Malaysia. Studies and form revisions are usually conducted in a relatively unsystematic and qualitative manner, yet the search for an ideal contractual document continues, and it is not clear how such qualitative findings can help improve and re-draft contractual documents. This study aims to quantitatively and systematically analyse and evaluate the rights and obligations of the contracting parties as stated in the standard forms of contract. The Institution of Engineers Malaysia (IEM) published a new standard form of contract in 2012 with a total of 63 clauses, but the improvements and changes in the newly revised form had yet to be analysed; the IEM form is therefore used as the case study. Every clause in this form was interpreted and analysed with respect to the parties involved, including the contractor, engineer, and employer. Modified from the Matrix Method and the Likert Scale, the analysis rated each clause on a scale from 0 to 1 with five ratings, namely "Very Unbalance", "Unbalance", "Balance", "Good Balance" and "Very Good Balance". It is hoped that this quantitative method of form study can be used in future form revisions and in the drafting of new forms, so as to reduce subjectivity in studies of standard forms of contract. Keywords: contracting parties, Malaysia, obligations, quantitative measurement, rights, standard form of contract
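The 0-to-1 scoring with five verbal ratings could be mechanized as below. The band cut-offs and the per-clause scores are illustrative assumptions; only the five rating labels come from the abstract:

```python
def classify(score):
    """Map a 0-1 balance score onto the five-point rating scale."""
    # Equal-width bands are an assumption; the paper's cut-offs may differ.
    bands = [(0.2, "Very Unbalance"), (0.4, "Unbalance"), (0.6, "Balance"),
             (0.8, "Good Balance"), (1.01, "Very Good Balance")]  # 1.01 so that 1.0 falls in the top band
    for upper, label in bands:
        if score < upper:
            return label

# Hypothetical per-clause scores for each party (0 = all obligations, 1 = all rights).
clause_scores = {
    "contractor": [0.45, 0.55, 0.60],
    "engineer":   [0.70, 0.75, 0.80],
    "employer":   [0.50, 0.35, 0.40],
}
for party, scores in clause_scores.items():
    avg = sum(scores) / len(scores)
    print(party, round(avg, 2), classify(avg))
```

Averaging clause scores per party and classifying the average gives a single balance rating per party, which is the kind of systematic summary the study aims at.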
Procedia PDF Downloads 267

16120 Simulation of Non-Crimp 3D Orthogonal Carbon Fabric Composite for Aerospace Applications Using Finite Element Method
Authors: Sh. Minapoor, S. Ajeli, M. Javadi Toghchi
Abstract:
Non-crimp 3D orthogonal fabric composite is one of the textile-based composites that are rapidly developing as light-weight engineering materials. The present paper focuses on geometric and micromechanical modeling of non-crimp 3D orthogonal carbon fabric and of composites reinforced with it for aerospace applications. In this research, meso-scale finite element (FE) modeling is employed for stress analysis under different load conditions. Since mechanical testing of expensive textile carbon composites for a specific application is not affordable, simulating the composite in a virtual environment is a helpful way to investigate its mechanical properties under different conditions. Keywords: woven composite, aerospace applications, finite element method, mechanical properties
Procedia PDF Downloads 465

16119 Effect of Anion and Amino Functional Group on Resin for Lipase Immobilization with Adsorption-Cross Linking Method
Authors: Heri Hermansyah, Annisa Kurnia, A. Vania Anisya, Adi Surjosatyo, Yopi Sunarya, Rita Arbianti, Tania Surya Utami
Abstract:
Lipase is a biocatalyst applied commercially in industrial processes, such as in the bioenergy, food, and pharmaceutical industries. Biocatalysts are preferred in industry because they work under mild conditions, offer high specificity, and reduce energy consumption (high pressure and temperature). However, the use of lipase at industrial scale is limited for economic reasons, due to the high price of lipase and the difficulty of the separation system. Immobilization of lipase is one solution that maintains the activity of lipase and simplifies separation in the process. Therefore, we conducted a study on lipase immobilization with the adsorption-cross linking method using glutaraldehyde, because this method produces high enzyme loading and stability. Lipase was immobilized on different kinds of resin with various functional groups. The highest enzyme loading (76.69%) was achieved by lipase immobilized on anion macroporous resin, which carries an anion functional group (OH⁻). However, the highest activity (24.69 U/g support), measured by the olive oil emulsion method, was achieved by lipase immobilized on anion macroporous-chitosan resin, which carries both amino (NH₂) and anion (OH⁻) functional groups. In addition, this preparation produced biodiesel with a yield of 50.6% through an interesterification reaction, retaining 63.9% of the initial yield after 4 cycles. Aspergillus niger lipase immobilized on anion macroporous-chitosan showed a unit activity of 22.84 U/g resin and a biodiesel yield higher than that of commercial lipase (69.1%), retaining 70.6% of the initial yield after 4 cycles. This shows that the optimum functional groups on a support for immobilization by adsorption-cross linking are amino (NH₂) and anion (OH⁻), because they react with glutaraldehyde and bind the enzyme, preventing desorption of the lipase from the support. Keywords: adsorption-cross linking, immobilization, lipase, resin
Procedia PDF Downloads 371

16118 Prediction of Conducted EMI Noise in a Converter
Abstract:
Due to higher switching frequencies, conducted electromagnetic interference (EMI) noise is generated in a converter and degrades its performance. It is therefore essential to mitigate the EMI noise of a high-performance converter. Conducted EMI comprises two types of emission: common-mode (CM) and differential-mode (DM) noise. CM noise is due to parasitic capacitances present in the converter, while DM noise is caused by the switching current. There is thus a dire need to understand the main causes of EMI noise. Hence, we propose a novel method to predict the conducted EMI noise of different converter topologies at an early design stage. This paper also presents a comparison of the conducted EMI noise produced by different SMPS topologies. We further develop an EMI noise model for a converter that allows detailed performance analysis. The proposed method is applied to different converters as examples, and experimental results verify the novel prediction technique. Keywords: EMI, electromagnetic interference, SMPS, switch-mode power supply, common mode, CM, differential mode, DM, noise
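The CM/DM split mentioned above follows from the standard decomposition of the two LISN conductor voltages: V_CM = (V_L + V_N)/2 and V_DM = (V_L - V_N)/2. A minimal sketch with made-up sample values, not a model of the paper's prediction method:

```python
def cm_dm(v_line, v_neutral):
    """Decompose LISN line/neutral voltages into CM and DM components."""
    v_cm = 0.5 * (v_line + v_neutral)  # common mode: in-phase part
    v_dm = 0.5 * (v_line - v_neutral)  # differential mode: anti-phase part
    return v_cm, v_dm

# Sample-by-sample decomposition of two measured waveforms (illustrative numbers).
v_l = [0.8, 1.2, -0.5, 0.3]
v_n = [0.6, -1.0, -0.7, 0.5]
for vl, vn in zip(v_l, v_n):
    cm, dm = cm_dm(vl, vn)
    print(f"CM={cm:+.2f} V  DM={dm:+.2f} V")
```

Equal in-phase voltages are pure CM, equal anti-phase voltages are pure DM; separating the two is the usual first step before attributing noise to parasitic capacitance (CM) or switching current (DM).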
Procedia PDF Downloads 1213

16117 Modeling of Cf-252 and PuBe Neutron Sources by Monte Carlo Method in Order to Develop Innovative BNCT Therapy
Authors: Marta Błażkiewicz, Adam Konefał
Abstract:
Currently, boron neutron capture therapy (BNCT) is carried out mainly with a neutron beam generated in a research nuclear reactor. This fact limits the possibility of realizing BNCT in centers distant from such reactors. Moreover, the number of active nuclear reactors in operation worldwide is decreasing due to their limited operational lifetimes and the lack of new installations, so the opportunities for carrying out boron-neutron therapy based on a neutron beam from an experimental reactor are shrinking. The use of nuclear power reactors for BNCT purposes is impossible because their infrastructure is not intended for radiotherapy. A serious challenge, therefore, is to find ways to perform boron-neutron therapy based on neutrons generated outside a research nuclear reactor. This work meets this challenge. Its goal is to develop a BNCT technique based on commonly available neutron sources such as Cf-252 and PuBe, which would enable the above-mentioned therapy in medical centers unrelated to nuclear research reactors. Advances in neutron source fabrication make it possible to achieve strong neutron fluxes. The current stage of research focuses on developing virtual models of the above-mentioned sources using the Monte Carlo simulation method. In this study, the GEANT4 toolkit was used, including its High Precision Neutron model for simulating neutron-matter interactions. Models of the neutron sources were developed and verified experimentally using the activation detector method with indium foil and the cadmium differentiation method, which separates the indium activation contribution of thermal neutrons from that of resonance neutrons.
Given the large number of factors affecting the result of the verification experiment, a 10% discrepancy between the simulation and experimental results was accepted. Keywords: BNCT, virtual models, neutron sources, Monte Carlo, GEANT4, neutron activation detectors, gamma spectroscopy
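The cadmium differentiation mentioned above subtracts the cadmium-covered (epithermal-only) foil activity from the bare-foil activity to isolate the thermal component. A minimal sketch with made-up activities; the cadmium correction factor is assumed to be 1.0 for simplicity:

```python
def cadmium_difference(a_bare, a_cd, f_cd=1.0):
    """Thermal-neutron activation via the cadmium difference method.

    a_bare : saturated activity of the bare indium foil (thermal + resonance)
    a_cd   : saturated activity of the cadmium-covered foil (resonance only)
    f_cd   : cadmium correction factor (assumed 1.0 here)
    """
    a_thermal = a_bare - f_cd * a_cd   # thermal contribution
    cd_ratio = a_bare / a_cd           # cadmium ratio, a measure of spectrum hardness
    return a_thermal, cd_ratio

# Illustrative activities, not measured values from the experiment.
a_th, r_cd = cadmium_difference(a_bare=5000.0, a_cd=1250.0)
print(a_th, r_cd)
```

A large cadmium ratio indicates a well-thermalized field, a ratio near 1 a hard (epithermal-dominated) spectrum; the paper uses this separation to verify the GEANT4 source models against the activation measurements.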
Procedia PDF Downloads 188