Search results for: displacement discontinuity method

16015 Reliable Method for Estimating Rating Curves in the Natural Rivers

Authors: Arash Ahmadi, Amirreza Kavousizadeh, Sanaz Heidarzadeh

Abstract:

The stage-discharge curve is one of the conventional methods for continuous river flow measurement. In this paper, an innovative approach is proposed for predicting the stage-discharge relationship using isovel contours. With the proposed method, it is possible to estimate the stage-discharge curve for the whole cross-section using discharge information from only one arbitrary water level. For this purpose, multivariate relationships are used to determine the mean velocity in a cross-section. The unknown exponents of the proposed relationship were obtained using the second version of the Strength Pareto Evolutionary Algorithm (SPEA2), and the appropriate equation was selected by applying the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) approach. Results showed close agreement between the estimated and observed data in the different cross-sections.
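As a point of reference for the stage-discharge relationship discussed above, the following is a minimal Python sketch of fitting a conventional power-law rating curve Q = a·(h − h0)^b by least squares. It is a generic baseline only, not the authors' isovel-contour/SPEA2 method, and the stage-discharge pairs are hypothetical.

```python
# Minimal sketch: least-squares fit of a conventional power-law rating curve.
import numpy as np

def fit_rating_curve(stage, discharge, h0=0.0):
    """Fit log Q = log a + b*log(h - h0) and return (a, b)."""
    x = np.log(np.asarray(stage) - h0)
    y = np.log(np.asarray(discharge))
    b, log_a = np.polyfit(x, y, 1)       # slope, intercept
    return np.exp(log_a), b

# Hypothetical observations (stage in m, discharge in m^3/s)
stage = [0.5, 1.0, 1.5, 2.0, 2.5]
discharge = [2.1, 6.0, 11.5, 18.0, 26.0]
a, b = fit_rating_curve(stage, discharge)
print(f"Q = {a:.2f} * h^{b:.2f}")
```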

Keywords: rating curves, SPEA2, natural rivers, bed roughness distribution

Procedia PDF Downloads 151
16014 Innovation in PhD Training in the Interdisciplinary Research Institute

Authors: B. Shaw, K. Doherty

Abstract:

The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute including art, design, media production, communication studies, computing and engineering. Across these disciplines there can seem to be enormous differences of research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within often unacknowledged histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding of research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multi-disciplinary context. Instead of presenting results at a conference, research students were tasked to articulate their method of inquiry. A working party of students from across disciplines had to design a conference call, visual identity and an event framework that would work for students across all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame that would meet the expectations of different disciplines for the conference call was challenging. The first abstracts submitted either resorted to reporting findings, or only described method briefly. It took several weeks of supported intervention for research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts, the conference committee generated key methodological categories for conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’. Each session involved presentations by visual artists, communications students and computing researchers with inter-disciplinary dialogue, facilitated by alumni Chairs. The apparently simple focus on method illuminated research process as a site of creativity, innovation and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reportage of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualizing generated by communications and art and design students, and art and design students gained understanding from the greater ‘distance’ and emphasis on application that computing students applied to their subjects.
Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.

Keywords: interdisciplinary, method, research student, training

Procedia PDF Downloads 201
16013 Role of Pulp Volume Method in Assessment of Age and Gender in Lucknow, India, an Observational Study

Authors: Anurag Tripathi, Sanad Khandelwal

Abstract:

Age and gender determination are required in forensics for victim identification. There is secondary dentine deposition throughout life, resulting in decreased pulp volume and size. Evaluation of pulp volume using Cone Beam Computed Tomography (CBCT) is a noninvasive method to evaluate the age and gender of an individual. The study was done to evaluate the efficacy of the pulp volume method in the determination of age and gender. Aims/Objectives: The study was conducted to estimate age and determine sex by measuring tooth pulp volume with the help of CBCT. An observational study of one year duration on CBCT data of individuals was conducted in Lucknow. Maxillary central incisors (CI) and maxillary canines (C) of the randomly selected samples were assessed for measurement of pulp volume using software. Statistical analysis: Chi-square test, arithmetic mean, standard deviation, Pearson's correlation, linear and logistic regression analysis. Results: The CBCT data of ninety individuals with an age range of 18-70 years were evaluated for pulp volume of the central incisor and canine (CI & C). The Pearson correlation coefficient between the tooth pulp volume (CI & C) and chronological age suggested that pulp volume decreased with age. The validation of the equations for sex determination showed higher prediction accuracy for CI (56.70%) and lower for C (53.30%). Conclusion: Pulp volume obtained from CBCT is a reliable indicator for age estimation and gender prediction.

Keywords: forensic, dental age, pulp volume, cone beam computed tomography

Procedia PDF Downloads 94
16012 Study of Dispersion of Silica and Chitosan Nanoparticles into Gelatin Film

Authors: Mohit Batra, Noel Sarkar, Jayeeta Mitra

Abstract:

In this study, silica nanoparticles were synthesized using different methods and different silica sources, namely tetraethyl orthosilicate (TEOS), sodium silicate, and rice husk, while chitosan nanoparticles were prepared by the ionic gelation method using sodium tripolyphosphate (TPP). The size and texture of the silica nanoparticles were studied using field emission scanning electron microscopy (FESEM) and transmission electron microscopy (TEM), along with the effect of changes in the concentration of various reagents in the different synthesis processes. The size and dispersion of silica nanoparticles prepared from TEOS using the Stöber method were found to be better than those of the other methods, while nanoparticles prepared using rice husk were cheaper than the other ones. The catalyst was found to play a very significant role in controlling the size of the nanoparticles in all methods.

Keywords: silica nanoparticles, gelatin, bio-nanocomposites, SEM, TEM, chitosan

Procedia PDF Downloads 310
16011 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where classes are composed of different numbers of sub-clusters containing different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic-based methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods available for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing between-class imbalance and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used, as this is one classifier in which the total error is minimized, and removing the between-class imbalance and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative to handle various problem domains like credit scoring, customer churn prediction, financial distress, etc., that typically involve imbalanced data sets.
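The following Python sketch illustrates the core idea of cluster-aware oversampling described above: split the minority class into sub-clusters by model-based clustering, then interpolate synthetic points within each sub-cluster. It is a simplification under stated assumptions, not the authors' full method (which also uses sub-cluster complexity and the Lowner-John ellipsoid); the allocation rule and data here are hypothetical.

```python
# Sketch of cluster-aware oversampling for the minority class.
import numpy as np
from sklearn.mixture import GaussianMixture

def oversample_minority(X_min, n_needed, n_clusters=2, seed=0):
    rng = np.random.default_rng(seed)
    # model-based clustering of the minority class into sub-clusters
    labels = GaussianMixture(n_components=n_clusters, random_state=seed).fit_predict(X_min)
    synthetic = []
    for c in range(n_clusters):
        Xc = X_min[labels == c]
        if len(Xc) < 2:
            continue
        # allocate new samples proportionally to sub-cluster size (simplified rule)
        n_c = int(round(n_needed * len(Xc) / len(X_min)))
        for _ in range(n_c):
            i, j = rng.integers(0, len(Xc), size=2)
            t = rng.random()
            synthetic.append(Xc[i] + t * (Xc[j] - Xc[i]))   # convex interpolation
    return np.array(synthetic)

rng = np.random.default_rng(1)
X_min = rng.normal(size=(30, 2))                 # hypothetical minority-class samples
X_new = oversample_minority(X_min, n_needed=70)  # synthetic examples to rebalance the classes
```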

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 411
16010 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, and there are still some slight deviations in terms of scale differences. However, insufficient parameters or poor surface mesh quality is likely to occur if these small deviations are carried over to a future civil aircraft with a size quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, a study of the geometric similarity of airfoil parameters and surface mesh quality in CFD calculation is conducted to assess how different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, including the wing root and wingtip of a conventional civil aircraft and the wing root of the giant hybrid wing, used with three parameterization methods to compare the calculation differences between different sizes of airfoils. In this study, the constants are NACA 0012, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study also uses different numbers of edge mesh divisions and the same bias factor in the CFD simulation. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to maintain the accuracy of the aerodynamic performance of the wing. When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to support the accuracy of the airfoil's aerodynamic performance, which faces the severe test of insufficient computer capacity. On the other hand, when using the B-spline curve method, the number of control points and mesh divisions should be set appropriately to obtain higher accuracy; however, the quantitative balance cannot be directly defined, and the decisions have to be made repeatedly by trial and error. Lastly, when using the CST method, it is found that a limited number of control points is enough to accurately parameterize the larger-sized wing; a higher degree of accuracy and stability can be obtained even on a lower-performance computer.
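For readers unfamiliar with the CST parameterization named above, here is a short Python sketch of Kulfan's class/shape function form, y(x) = C(x)·S(x), with class function C(x) = x^0.5·(1 − x) for a round-nose, sharp-trailing-edge airfoil and a Bernstein-polynomial shape function. The coefficient values are hypothetical and not taken from the study.

```python
# Sketch of the class/shape function transformation (CST) airfoil parameterization.
import numpy as np
from math import comb

def cst_surface(x, coeffs, n1=0.5, n2=1.0):
    x = np.asarray(x)
    n = len(coeffs) - 1
    class_fn = x**n1 * (1.0 - x)**n2                       # class function
    shape_fn = sum(a * comb(n, i) * x**i * (1.0 - x)**(n - i)   # Bernstein shape function
                   for i, a in enumerate(coeffs))
    return class_fn * shape_fn

x = np.linspace(0.0, 1.0, 101)                   # normalized chordwise stations
upper = cst_surface(x, [0.17, 0.16, 0.14, 0.12])   # hypothetical control coefficients
lower = -cst_surface(x, [0.17, 0.16, 0.14, 0.12])  # mirrored for a symmetric (NACA 0012-like) section
```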

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 216
16009 The Nonlinear Dynamic Response of a Rotor System Supported by Hydrodynamic Journal Bearings

Authors: Amira Amamou, Mnaouar Chouchane

Abstract:

This paper investigates the bifurcation and nonlinear behavior of a two-degree-of-freedom model of a symmetrical balanced rigid rotor supported by two identical journal bearings. The fluid film hydrodynamic reactions are modeled by applying both the short and the long bearing approximations and using the half Sommerfeld solution. A numerical integration of the equations of the journal centre motion is presented to predict the presence and the size of stable or unstable limit cycles in the neighborhood of the stability critical speed. For their stability margins, a continuation method based on the predictor-corrector mechanism is used. The numerical results show that the stability and bifurcation behaviors of periodic motions depend strongly on the bearing parameters and their dynamic characteristics.

Keywords: hydrodynamic journal bearing, nonlinear stability, continuation method, bifurcations

Procedia PDF Downloads 404
16008 Evaluation of Academic Research Projects Using the AHP and TOPSIS Methods

Authors: Murat Arıbaş, Uğur Özcan

Abstract:

Due to the increasing number of universities and academics, university funds for research activities and the grants/supports given by government institutions have increased the number and quality of academic research projects. Although every academic research project has a specific purpose and importance, limited resources (money, time, manpower, etc.) require choosing the best ones from all (Amiri, 2010). It is a hard process to compare projects and determine which is better when the projects serve different purposes. In addition, the evaluation process has become complicated since there are more than one evaluator and multiple criteria for the evaluation (Dodangeh, Mojahed and Yusuff, 2009). Mehrez and Sinuany-Stern (1983) defined the project selection problem as a Multi Criteria Decision Making (MCDM) problem. If a decision problem involves multiple criteria and objectives, it is called a Multi Attribute Decision Making problem (Ömürbek & Kınay, 2013). There are many MCDM methods in the literature for the solution of such problems. These methods are AHP (Analytic Hierarchy Process), ANP (Analytic Network Process), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), UTADIS (Utilities Additives Discriminantes), ELECTRE (Elimination et Choix Traduisant la Realite), MAUT (Multiattribute Utility Theory), GRA (Grey Relational Analysis), etc. Each method has some advantages compared with others (Ömürbek, Blacksmith & Akalın, 2013). Hence, to decide which MCDM method will be used for the solution of the problem, factors like the nature of the problem, types of choices, measurement scales, type of uncertainty, dependency among the attributes, expectations of the decision maker, and quantity and quality of the data should be considered (Tavana & Hatami-Marbini, 2011). This study aims to develop a systematic decision process for grant support applications that are expected to be evaluated according to their scientific adequacy by multiple evaluators under certain criteria. In this context, the project evaluation process applied by The Scientific and Technological Research Council of Turkey (TÜBİTAK), one of the leading institutions in our country, was investigated. Firstly, the criteria to be used in the project evaluation were decided. The main criteria were selected among the TÜBİTAK evaluation criteria. These criteria were originality of the project, methodology, project management/team and research opportunities, and extensive impact of the project. Moreover, for each main criterion, 2-4 sub-criteria were defined; hence it was decided to evaluate projects over 13 sub-criteria in total. Because of the superiority of the AHP method in determining criteria weights and the opportunity the TOPSIS method provides for ranking a great number of alternatives, they are used together. The AHP method, developed by Saaty (1977), is based on selection by pairwise comparisons. Because of its simple structure and ease of understanding, AHP is a very popular method in the literature for determining criteria weights in MCDM problems. Besides, the TOPSIS method, developed by Hwang and Yoon (1981) as an MCDM technique, is an alternative to the ELECTRE method and is used in many areas. In the method, the distance from each decision point to the ideal and to the negative ideal solution point is calculated using the Euclidean distance approach.
In the study, the main criteria and sub-criteria were compared on their own merits by using questionnaires that were developed based on an importance scale by four groups of people (i.e., TÜBİTAK specialists, TÜBİTAK managers, academics and individuals from the business world). After these pairwise comparisons, the weight of each main criterion and sub-criterion was calculated using the AHP method. Then these calculated criteria weights were used as an input in the TOPSIS method, and a sample consisting of 200 projects was ranked on its own merits. This new system provided the opportunity to get the views of the people that take part in the project process, including preparation, evaluation and implementation, on the evaluation of academic research projects. Moreover, instead of using four main criteria with equal weight to evaluate projects, a systematic decision-making process was developed by using 13 weighted sub-criteria and each decision point's distance from the ideal solution. By this evaluation process, a new approach was created to determine the importance of academic research projects.
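The two computational steps described above can be sketched compactly. The Python example below derives criteria weights from an AHP pairwise-comparison matrix via its principal eigenvector and then ranks alternatives with TOPSIS using Euclidean distances to the ideal and negative-ideal solutions. The pairwise matrix and project scores are hypothetical, not TÜBİTAK data.

```python
# Sketch of AHP weight derivation followed by TOPSIS ranking.
import numpy as np

def ahp_weights(pairwise):
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])  # principal eigenvector
    return w / w.sum()

def topsis_rank(decision, weights, benefit=None):
    D = decision / np.linalg.norm(decision, axis=0)        # vector-normalize each criterion column
    V = D * weights
    benefit = np.ones(decision.shape[1], bool) if benefit is None else benefit
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)              # Euclidean distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)               # distance to negative ideal
    return d_neg / (d_pos + d_neg)                         # closeness coefficient

# Hypothetical 3x3 pairwise matrix for three criteria, and scores of four projects
A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
weights = ahp_weights(A)
scores = np.array([[7, 8, 6], [9, 5, 7], [6, 9, 8], [8, 7, 5]], float)
print(topsis_rank(scores, weights))   # higher closeness = better-ranked project
```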

Keywords: academic projects, AHP method, research projects evaluation, TOPSIS method

Procedia PDF Downloads 587
16007 Simplifying Seismic Vulnerability Analysis for Existing Reinforced Concrete Buildings

Authors: Maryam Solgi, Behzad Shahmohammadi, Morteza Raissi Dehkordi

Abstract:

One of the main steps in the seismic retrofitting of buildings is to determine the vulnerability of structures. Current procedures for evaluating existing buildings are complicated, and there is no distinction between short, mid-rise, and tall buildings. This research utilizes a simplified method for assessing structures, which is adequate for existing reinforced concrete buildings. To approach this aim, the Simple Lateral Mechanisms Analysis (SLaMA) procedure proposed by NZSEE (New Zealand Society for Earthquake Engineering) has been carried out. In this study, three RC moment-resisting frame buildings are considered. First, these buildings have been evaluated by the inelastic static procedure (pushover) based on acceptance criteria. Then, the Park-Ang damage index is determined for all members of each building by inelastic time history analysis. Next, the Simple Lateral Mechanisms Analysis procedure, a hand method, is carried out to define the capacity of the structures. Ultimately, the existing procedures are compared in terms of the peak ground acceleration causing failure (PGAfail). The results of this comparison emphasize that the pushover procedure and the SLaMA method give a greater value of PGAfail than the Park-Ang damage model.
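For context, the Park-Ang damage index referenced above combines peak deformation with cumulative hysteretic energy, DI = δ_max/δ_u + β·E_h/(F_y·δ_u). The short Python sketch below evaluates it for a single member; the numbers are hypothetical, not taken from the study's buildings.

```python
# Sketch of the Park-Ang damage index for one member (hypothetical values).
def park_ang_damage(delta_max, delta_u, hysteretic_energy, f_y, beta=0.05):
    """delta_max: peak deformation, delta_u: ultimate deformation,
    hysteretic_energy: cumulative dissipated energy, f_y: yield strength,
    beta: calibration constant."""
    return delta_max / delta_u + beta * hysteretic_energy / (f_y * delta_u)

di = park_ang_damage(delta_max=0.032, delta_u=0.060,
                     hysteretic_energy=18.0e3, f_y=250.0e3, beta=0.05)
print(f"Park-Ang DI = {di:.2f}")   # DI >= 1.0 is usually interpreted as collapse-level damage
```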

Keywords: peak ground acceleration caused to fail, reinforced concrete moment-frame buildings, seismic vulnerability analysis, simple lateral mechanisms analysis

Procedia PDF Downloads 89
16006 Empirical Mode Decomposition Based Denoising by Customized Thresholding

Authors: Wahiba Mohguen, Raïs El’hadi Bekka

Abstract:

This paper presents a denoising method called EMD-Custom that is based on Empirical Mode Decomposition (EMD) and a modified customized thresholding function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). Then, all the noisy IMFs are thresholded by applying the presented thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising methods using soft and hard thresholding. The results showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches. The performances were evaluated in terms of SNR in dB and Mean Square Error (MSE).
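The general EMD-then-threshold pipeline can be sketched as below in Python. A plain soft threshold is used here as a stand-in, since the paper's customized thresholding function is not reproduced; the sketch assumes the PyEMD package, and the test signal is synthetic.

```python
# Sketch of EMD-based denoising: decompose, threshold each IMF, reconstruct.
import numpy as np
from PyEMD import EMD   # assumes the PyEMD package is installed

def emd_denoise(signal, k=1.0):
    imfs = EMD().emd(signal)                        # adaptive decomposition into IMFs
    cleaned = []
    for imf in imfs:
        sigma = np.median(np.abs(imf)) / 0.6745     # robust per-IMF noise estimate
        thr = k * sigma * np.sqrt(2.0 * np.log(len(imf)))
        cleaned.append(np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0))  # soft threshold
    return np.sum(cleaned, axis=0)                  # reconstruct from thresholded IMFs

t = np.linspace(0.0, 1.0, 1000)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
denoised = emd_denoise(noisy)
```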

Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft-thresholding

Procedia PDF Downloads 297
16005 The Attitudinal Effects of Dental Hygiene Students When Changing Conventional Practices of Preventive Therapy in the Dental Hygiene Curriculum

Authors: Shawna Staud, Mary Kaye Scaramucci

Abstract:

Objective: Rubber cup polishing has been a traditional method of preventive therapy in dental hygiene treatment. Newer methods such as air polishing have changed the way dental hygiene care is provided, yet this technique has not been embraced by students in the program nor by practitioners in the workforce. Students entering the workforce tend to follow office protocol and have limited confidence to introduce technologies learned in the curriculum. This project was designed to help students gain confidence in newer skills and encourage private practice settings to adopt newer technologies for patient care. Our program recently introduced air polishing earlier in the program, before the rubber cup technique, to determine if students would embrace the technology and become leading-edge professionals when they enter the marketplace. Methods: The class of 2022 was taught the traditional method of polishing in the first-year curriculum and air polishing in the second-year curriculum. The class of 2023 will be taught the air polishing method in the first-year curriculum and the traditional method of polishing in the second-year curriculum. Pre- and post-graduation survey data will be collected from both cohorts. Descriptive statistics and paired t-tests with alpha set at .05 will be used to compare pre- and post-survey results. Results: This study is currently in progress, with a completion date of October 2023. The class of 2022 completed the pre-graduation survey in the spring of 2022. The post-graduation survey will be sent out in October 2022. The class of 2023 cohort will be surveyed in the spring of 2023 and October 2023. Conclusion: Our hypothesis is that students who are taught air polishing first will be more inclined to adopt that skill in private practice, thereby embracing newer technology and improving oral health care.

Keywords: luggage handling system at world’s largest pilgrimage center

Procedia PDF Downloads 97
16004 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon due to the difficulty of collecting training samples. Hence, many researchers have developed feature selection methods, such as the F-score, HSIC (Hilbert-Schmidt Independence Criterion), etc., to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with different bandwidths with respect to different features. Moreover, it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths such that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates the corresponding feature space is more suitable for classification. In addition, the corresponding nonlinear classification boundary can separate classes very well. These optimal bandwidths also show the importance of bands for hyperspectral image classification. The reciprocals of these bandwidths can be viewed as weights of bands. The smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting the appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset. All non-background samples were used to form the testing dataset. The support vector machine was applied to classify these testing samples based on the selected feature subsets. According to the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, the F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas the F-score and HSIC select 168 features and 217 features, respectively. Moreover, the classification accuracies increase dramatically when using only the first few features. The classification accuracies with respect to feature subsets of 10 features, 20 features, 50 features, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the selected features (110 features) of the proposed method, the corresponding classification accuracy (0.84168) is close to the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, similar results are obtained. These results illustrate that our proposed method can efficiently find feature subsets to improve hyperspectral image classification. One can apply the proposed method to determine the suitable feature subset first according to specific purposes. Then researchers can use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
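The building blocks named above (a per-band bandwidth RBF kernel and a within/between-class separability score, with 1/bandwidth acting as a band weight) can be sketched as follows in Python. This is a simplification under stated assumptions, not the paper's exact criterion or its genetic-algorithm tuning; the toy data and bandwidths are hypothetical.

```python
# Sketch of a generalized RBF kernel with per-band bandwidths and a simple separability score.
import numpy as np

def weighted_rbf(X, Y, bandwidths):
    # k(x, y) = exp(-sum_d (x_d - y_d)^2 / (2 * s_d^2))
    diff = X[:, None, :] - Y[None, :, :]
    return np.exp(-0.5 * np.sum((diff / bandwidths) ** 2, axis=2))

def separability(X, y, bandwidths):
    classes = np.unique(y)
    within = np.mean([weighted_rbf(X[y == c], X[y == c], bandwidths).mean()
                      for c in classes])
    between = np.mean([weighted_rbf(X[y == a], X[y == b], bandwidths).mean()
                       for a in classes for b in classes if a != b])
    return within - between     # larger = tighter classes that sit farther apart

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                  # 20 samples, 5 bands (toy data)
y = np.repeat([0, 1], 10)
s = np.array([1.0, 0.5, 2.0, 1.5, 0.8])       # per-band bandwidths (e.g. as tuned by a GA)
print(separability(X, y, s), "band weights:", 1.0 / s)
```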

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 259
16003 A Method to Assess Aspect of Sustainable Development: Walkability

Authors: Amna Ali Al-Saadi, Riken Homma, Kazuhisa Iki

Abstract:

Despite the fact that many places have succeeded in achieving some aspects of sustainable urban development, there are no scientific facts to convince decision makers. Also, each of these cases was developed to fulfill the needs of a specific city only. Therefore, an objective method to generate solutions from a successful case is the aim of this research. The questions were: how to learn the lesson from each case study; how to distinguish the potential criteria from the negative ones; and how to quantify their effects on future development. Walkability has been selected as the goal. This is because it has been found to be a solution for achieving a healthy lifestyle as well as social, environmental and economic sustainability. Moreover, it has the same complications as every aspect of sustainable development. This research stands on a quantitative-comparative methodology in order to assess pedestrian-oriented development. Three analyzed areas (AAs) were selected. One site is located in Oman, which is hypothesized to be a motorized-oriented development, while two sites are in Japan, where the development is pedestrian friendly. The study used the multi-criteria evaluation method (MCEM). Initially, MCEM stands on the analytic hierarchy process (AHP). The latter was structured into a main goal (walkability), objectives (functions and layout) and attributes (the urban form criteria). Secondly, GIS was used to evaluate the attributes in multi-criteria maps. Since each criterion has a different scale of measurement, all results were standardized by z-score and used to measure the correlations among criteria. As a result, a different scenario was generated from each AA. MCEM (AHP-OWA)-GIS measured the walkability score and determined the priority of criteria development in the non-walker-friendly environment. The comparison of criteria z-scores presented a measurable, distinguished orientation of development. This result has been used to prove that Oman is a motorized environment while Japan is walkable. Also, it defined the powerful criteria and the weak criteria regardless of the AA. This result has been used to generalize the priority for walkable development. In conclusion, the method was found successful in generating a scientific basis for policy decisions.
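The scoring step described above (z-score standardization of each criterion map followed by a weighted overlay) can be illustrated with a small Python sketch. The criterion layers and weights below are hypothetical placeholders, not the study's GIS data or AHP weights.

```python
# Sketch: z-score standardization of criterion layers and an AHP-weighted overlay.
import numpy as np

def zscore(layer):
    layer = np.asarray(layer, float)
    return (layer - layer.mean()) / layer.std()

def walkability_score(layers, weights):
    standardized = np.stack([zscore(l) for l in layers])   # shape: (criteria, cells)
    return np.tensordot(weights, standardized, axes=1)     # weighted linear overlay per cell

density   = [120, 300, 80, 450]    # hypothetical criterion values per analyzed cell
land_mix  = [0.2, 0.6, 0.1, 0.8]
intersect = [15, 40, 10, 55]
score = walkability_score([density, land_mix, intersect], weights=[0.5, 0.3, 0.2])
print(score)   # higher = more walkable cell
```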

Keywords: walkability, policy decisions, sustainable development, GIS

Procedia PDF Downloads 435
16002 Utilization of Hybrid Teaching Methods to Improve Writing Skills of Undergraduate Students

Authors: Tahira Zaman

Abstract:

The paper intends to discover the utility of hybrid teaching methods in helping undergraduate students improve their English academic writing skills. A total of 45 undergraduate students of varying language abilities were selected randomly from three classes, with a research design using monitoring and rubric evaluation as the means of measurement. The students' language skills were developed with the help of experiential learning methods using a reflective writing technique, a guided method in which students were directed to the correct forms of writing techniques, and a self-guided method in which students produced a library-research-based article measured through a standardized rubric provided. The progress of the students was monitored and checked through rubrics and self-evaluation, and it was concluded that a change was observed in the students' writing abilities.

Keywords: self-evaluation, hybrid, reflective writing

Procedia PDF Downloads 159
16001 Some Efficient Higher Order Iterative Schemes for Solving Nonlinear Systems

Authors: Sandeep Singh

Abstract:

In this article, two classes of iterative schemes are proposed for approximating solutions of nonlinear systems of equations, whose orders of convergence are six and eight, respectively. The sixth-order scheme requires the evaluation of two vector functions, two first Fréchet derivatives and three matrix inversions per iteration. This three-step sixth-order method is further extended to an eighth-order method, which requires one more step and the evaluation of one extra vector function. Moreover, the computational efficiency is compared with some other recently published methods, and our methods are found to be more efficient than existing numerical methods for large and medium-size nonlinear systems of equations. Numerical tests are performed to validate the proposed schemes.
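To make the multi-step structure of such schemes concrete, the Python sketch below runs a generic two-step Newton-type iteration that reuses the same Jacobian in the corrector step. This frozen-Jacobian variant is not the authors' sixth- or eighth-order method (whose formulas are not reproduced here); it only illustrates the idea of combining several substeps per iteration, and the example system is hypothetical.

```python
# Generic multi-step Newton-type sketch for a small nonlinear system.
import numpy as np

def F(x):   # example system: x0^2 + x1^2 - 4 = 0, x0*x1 - 1 = 0
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])

def J(x):   # Frechet derivative (Jacobian) of F
    return np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])

def multistep_newton(x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        Jx = J(x)
        y = x - np.linalg.solve(Jx, F(x))       # Newton predictor step
        x_new = y - np.linalg.solve(Jx, F(y))   # corrector reusing the same Jacobian
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(multistep_newton(np.array([2.0, 0.5])))
```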

Keywords: nonlinear systems, computational complexity, order of convergence, Jarratt-type scheme

Procedia PDF Downloads 129
16000 Effects of Polymer Adsorption and Desorption on Polymer Flooding in Waterflooded Reservoir

Authors: Sukruthai Sapniwat, Falan Srisuriyachai

Abstract:

Polymer flooding is one of the most well-known methods in Enhanced Oil Recovery (EOR) technology, which can be implemented after either primary or secondary recovery, resulting in favorable conditions for the displacement mechanism in order to lower the residual oil in the reservoir. Polymer substances can lower the mobility ratio of the whole process by increasing the viscosity of the injected water. Therefore, polymer flooding can increase volumetric sweep efficiency, which leads to a better recovery factor. Moreover, polymer adsorption onto the rock surface can help decrease the permeability contrast in reservoirs with high heterogeneity. Due to the reduction of the absolute permeability, the effective permeability to water, representing the flow ability of the injected fluid, is also reduced. Once polymer is adsorbed onto the rock surface, polymer molecules can be desorbed when different fluids are injected. This study is performed to evaluate the effects of the adsorption and desorption of polymer solutions on the oil recovery mechanism. A reservoir model is constructed using the reservoir simulation program STARS® commercialized by the Computer Modelling Group (CMG). Various polymer concentrations, starting times of the polymer flooding process and polymer injection rates were evaluated with selected values of polymer desorption degree, including 0, 25, 50, 75 and 100%. The higher the value, the more adsorbed polymer molecules return to the flowing fluid. According to the results, polymer desorption lowers polymer consumption, especially at low concentrations. Furthermore, the starting time of polymer flooding and the injection rate affect the oil production. The results show that waterflooding followed by earlier polymer flooding can increase the oil recovery factor, while a higher injection rate also enhances the recovery. Polymer concentration is related to polymer consumption due to the two main benefits of polymer flooding described above. Therefore, the polymer slug size should be optimized based on polymer concentration. Polymer desorption causes re-employment of polymer that was previously adsorbed onto the rock surface, resulting in an increase of sweep efficiency in the later period of the polymer flooding process. Even though waterflooding supports polymer injectivity, water cut at the producer can prematurely terminate the oil production. A higher injection rate decreases polymer adsorption due to the decreased retention time of the polymer flooding process.
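The mobility-ratio benefit mentioned above can be illustrated with a few lines of Python: raising the injected-water viscosity with polymer lowers M = (k_rw/μ_w)/(k_ro/μ_o), i.e. a more favorable displacement. The fluid and rock values below are hypothetical, not the study's simulation inputs.

```python
# Quick illustration of the mobility-ratio effect of polymer-thickened water.
def mobility_ratio(k_rw, mu_w, k_ro, mu_o):
    """k_rw, k_ro: relative permeabilities; mu_w, mu_o: viscosities (cP)."""
    return (k_rw / mu_w) / (k_ro / mu_o)

print(mobility_ratio(k_rw=0.3, mu_w=0.5, k_ro=0.8, mu_o=5.0))   # plain water: M > 1 (unfavorable)
print(mobility_ratio(k_rw=0.3, mu_w=15.0, k_ro=0.8, mu_o=5.0))  # polymer solution: M < 1 (favorable)
```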

Keywords: enhanced oil recovery technology, polymer adsorption and desorption, polymer flooding, reservoir simulation

Procedia PDF Downloads 322
15999 Numerical Investigation of Fluid Outflow through a Retinal Hole after Scleral Buckling

Authors: T. Walczak, J. K. Grabski, P. Fritzkowski, M. Stopa

Abstract:

The objectives of the study are i) to perform numerical simulations that permit an analysis of the dynamics of subretinal fluid when an implant has induced scleral intussusception and ii) to assess the impact of the physical parameters of the model on the flow rate. Computer simulations were created using the finite element method (FEM), based on a model that takes into account the interaction of a viscous fluid (subretinal fluid) with a hyperelastic body (retina). The purpose of the calculation was to investigate the dependence of the flow rate of subretinal fluid through a hole in the retina on different factors such as the viscosity of the subretinal fluid, the material parameters of the retina, and the offset of the implant from the retinal hole. These simulations were performed for different speeds of eye movement that reflect the behavior of the eye during reading, REM, and saccadic movements. Similar to other works in the field of subretinal fluid flow, a stationary, single-sided, forced fluid flow was assumed in the considered area simulating the subretinal space. Additionally, a hyperelastic material model of the retina and a parameterized geometry of the considered model were adopted. The calculations also examined the influence of the direction of the force of gravity, due to the position of the patient's head, on the trend of fluid outflow. The simulations revealed that fluid outflow from the retina becomes significant at an eyeball movement speed of 100°/sec. This speed is greater than in the case of reading but is four times less than saccadic movement. An increase in the viscosity of the fluid increased the beneficial effect. Further, the simulation results suggest that moderate eye movement speed is optimal and that the conventional prescription of avoiding routine eye movement following retinal detachment surgery could be relaxed. Additionally, to verify the numerical results, some calculations were repeated with the use of a meshless method (the method of fundamental solutions), which is relatively fast and easy to implement. The paper has been supported by grant 02/21/DSPB/3477.

Keywords: CFD simulations, FEM analysis, meshless method, retinal detachment

Procedia PDF Downloads 338
15998 Structural Analysis of Kamaluddin Behzad's Works Based on Roland Barthes' Theory of Communication, 'Text and Image'

Authors: Mahsa Khani Oushani, Mohammad Kazem Hasanvand

Abstract:

Text and image have always been two important components of Iranian layout. The interactive connection between text and image has shaped the art of book design with multiple patterns. In this research, first the structure and visual elements in the research data were analyzed, and then the position of the text element and the image element in relation to each other was studied and analyzed based on Roland Barthes' three theories of the text-image relationship, and the results were compared and interpreted. The purpose of this study is to investigate the pattern of text and image in the works of Kamaluddin Behzad based on three of Roland Barthes' communication theories: 1. descriptive communication, 2. reference communication, 3. matched communication. The questions of this research are: what is the relationship between text and image in Behzad's works, and how is it defined according to Roland Barthes' theory? The research was carried out with a structuralist approach and a descriptive-analytical method, using library collection methods. The information has been collected in the form of documents (library sources) and through online databases. Findings show that the dominant element in Behzad's drawings is the image, which creates a reference relationship in the layout of the drawings; in some cases, however, a different relationship is achieved in which, despite the prominence of the image on the page, the text is dispersed proportionally across the page and plays a more active role within the image. The text and the image then support each other equally on the page, a relationship that Roland Barthes describes as matched.

Keywords: text, image, Kamaluddin Behzad, Roland Barthes, communication theory

Procedia PDF Downloads 186
15997 Structural and Magnetic Properties of CoFe2-xNdxO4 Spinel Ferrite Nanoparticles

Authors: R. S. Yadav, J. Havlica, I. Kuřitka, Z. Kozakova, J. Masilko, M. Hajdúchová, V. Enev, J. Wasserbauer

Abstract:

In the present work, CoFe2-xNdxO4 (0.0 ≤ x ≤ 0.1) spinel ferrite nanoparticles were synthesized by a starch-assisted sol-gel auto-combustion method. Powder X-ray diffraction patterns revealed the formation of the cubic spinel ferrite with the signature of an NdFeO3 phase at higher Nd3+ concentration. The field emission scanning electron microscopy study demonstrated spherical nanoparticles in the size range of 5-15 nm. Raman and Fourier transform infrared spectra supported the formation of the spinel ferrite structure in nanocrystalline form. The X-ray photoelectron spectroscopy (XPS) analysis confirmed the presence of Co2+ and Fe3+ at octahedral as well as tetrahedral sites in the CoFe2-xNdxO4 nanoparticles. A change in magnetic properties with the variation of the concentration of Nd3+ ions in the cobalt ferrite nanoparticles was observed.

Keywords: nanoparticles, spinel ferrites, sol-gel auto-combustion method, CoFe2-xNdxO4

Procedia PDF Downloads 492
15996 Effect of the Poisson’s Ratio on the Behavior of Epoxy Microbeam

Authors: Mohammad Tahmasebipour, Hosein Salarpour

Abstract:

Researchers suggest that variations in Poisson's ratio affect the behavior of Timoshenko micro beams. Therefore, in this study, two epoxy Timoshenko micro beams with different dimensions were modeled using the finite element method, considering all boundary conditions and initial conditions that govern the problem. The effect of Poisson's ratio on the resonant frequency, maximum deflection, and maximum rotation of the micro beams was examined. The analyses suggest that an increased Poisson's ratio reduces the maximum deflection and the maximum rotation and increases the resonant frequency. The results were consistent with those obtained using the couple stress, classical, and strain gradient elasticity theories.

Keywords: microbeam, microsensor, epoxy, poisson’s ratio, dynamic behavior, static behavior, finite element method

Procedia PDF Downloads 457
15995 Diagrid Structural System

Authors: K. Raghu, Sree Harsha

Abstract:

The interrelationship between the technology and architecture of tall buildings is investigated from the emergence of tall buildings in the late 19th century to the present. Early designs of tall buildings in the late 19th century recognized the effectiveness of diagonal bracing members in resisting lateral forces. Most of the structural systems deployed for early tall buildings were steel frames with diagonal bracings of various configurations such as X, K, and eccentric. Through this historical research, a filtering concept of original and remedial technology is developed, through which one can clearly understand the inter-relationship between technical evolution, architectural aesthetics and the further stylistic transition of buildings. Diagonalized grid structures, or "diagrids", have emerged as one of the most innovative and adaptable approaches to structuring buildings in this millennium. Variations of the diagrid system have evolved to the point of making its use non-exclusive to the tall building. Diagrid construction is also to be found in a range of innovative mid-rise steel projects. Contemporary design practice of tall buildings is reviewed, and design guidelines are provided for new design trends. Investigated in depth are the behavioral characteristics and design methodology for diagrid structures, which emerge as a new direction in the design of tall buildings with their powerful structural rationale and symbolic architectural expression. Moreover, new technologies for tall building structures and facades are developed for performance enhancement through design integration, and their architectural potentials are explored. Considering the above, the analysis and design of 40-100 storey diagrid steel buildings is carried out using E-TABS software, with diagrids of various angles for the entire building, which helps reduce the steel requirement for the structure. The present project undertakes wind analysis and seismic analysis for lateral loads acting on the structure due to wind loads, earthquake loads, and gravity loads. All structural members are designed as per IS 800-2007 considering all load combinations. A comparison of results in terms of time period, top storey displacement and inter-storey drift is carried out. Secondary effects such as temperature variations are not considered in the design, assuming small variation.

Keywords: diagrid, bracings, structural, building

Procedia PDF Downloads 382
15994 Integrations of Students' Learning Achievements and Their Analytical Thinking Abilities with the Problem-Based Learning and the Concept Mapping Instructional Methods on Gene and Chromosome Issue at the 12th Grade Level

Authors: Waraporn Thaimit, Yuwadee Insamran, Natchanok Jansawang

Abstract:

Analytical thinking and learning achievement are critical components of learning: analytical thinking gives one the ability to break complex problems into components and solve them quickly and effectively, while learning achievement is the result achieved or acquired by students, reflected in changes within the individual as a result of learning activity. The aim of this study is to compare students' analytical thinking abilities and their learning achievements. The sample consisted of 80 students at the 12th grade level in 2 classes from Chaturaphak Phiman Ratchadaphisek School: a 40-student experimental group taught with the Problem-Based Learning (PBL) method and a 40-student control group taught with the Concept Mapping Instructional (CMI) method. The research instruments were composed of 5-lesson instructional plans, assessed with pretest and posttest techniques for each instructional method. Students' analytical thinking abilities were assessed with the Analytical Thinking Tests, and students' learning achievements were tested with the Learning Achievement Tests. Statistically significant differences, by the paired t-test and F-test (two-way MANCOVA), between post- and pre-tests of the whole group of students in the two chemistry classes were found. Associations between student learning outcomes in each instructional method and their analytical thinking abilities and learning achievements were also found (p < .05). The use of the two instructional methods in this study revealed that students' learning achievement in the chemistry classes was higher in the PBL group than in the CMI group. This suggests that analytical thinking ability involves the process of gathering relevant information and identifying key issues related to learning achievement.

Keywords: comparisons, students learning achievements, analytical thinking abilities, the problem-based learning method, the concept mapping instructional method, gene and chromosome issue, chemistry classes

Procedia PDF Downloads 256
15993 Experimental Performance and Numerical Simulation of Double Glass Wall

Authors: Thana Ananacha

Abstract:

This paper reports on the numerical and experimental performance of the Double Glass Wall. Two configurations were considered, namely the Double Clear Glass Wall (DCGW) and the Double Translucent Glass Wall (DTGW). The coupled governing equations as well as the boundary conditions are solved using the finite element method (FEM) via COMSOL™ Multiphysics. Temperature profiles and flow fields of the DCGW and DTGW are reported and discussed. Different constant heat fluxes were considered, namely 400 and 800 W.m-2; the corresponding initial condition temperatures were 30.5 and 38.5 ºC, respectively. The results show that the simulation results are in agreement with the experimental data. Conclusively, the model considered in this study could reasonably be used to simulate the thermal and ventilation performance of the DCGW and DTGW configurations.

Keywords: thermal simulation, Double Glass Wall, velocity field, finite element method (FEM)

Procedia PDF Downloads 356
15992 Investigating the Effect of Groundwater Level on Nailing Arrangement in Excavation Stability

Authors: G. Khamooshian, A. Abbasimoshaei

Abstract:

Different methods are used to stabilize excavations, among which nailing is commonly used. In recent years, the use of nailing for the stability of excavations has received much attention, as it provides sufficient stability, controls the deformations of the retaining structure, and also reduces the cost of the operation. In addition, this method is more prominent in deep excavations than other methods. The purpose of this paper is to investigate the effect of the groundwater level and soil type on the length and design of the nails. In this paper, analysis and modeling for a vertical excavation wall with constant depth and different levels of groundwater have been done. Also, by changing the soil resistance parameters and the design of the nails, an optimum arrangement was made, and the effect of changes in groundwater level and soil type on the design of the nails, the maximum axial force mobilized in the nails and the safety factor for the stability of the excavation was examined.

Keywords: excavation, soil effects, nailing, hole analyzing

Procedia PDF Downloads 174
15991 A Source Point Distribution Scheme for Wave-Body Interaction Problem

Authors: Aichun Feng, Zhi-Min Chen, Jing Tang Xing

Abstract:

A two-dimensional linear wave-body interaction problem can be solved using a desingularized integral method by placing free-surface Rankine sources over the calm water surface and satisfying boundary conditions at prescribed collocation points on the calm water surface. A new free-surface Rankine source distribution scheme, determined by the intersection points of the free surface and the body surface, is developed to reduce the numerical computation cost. Associated with this, a new treatment is given to the intersection point. The results of the present scheme are in good agreement with traditional numerical results and measurements.
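The basic desingularized Rankine-source idea can be sketched as below in Python: 2D point sources with potential φ = ln(r)/(2π) are raised a small distance above the calm water line, and their strengths are found by enforcing a prescribed normal velocity at collocation points on that line. This is a toy setup for illustration only, not the paper's full wave-body scheme; the geometry, offset and target velocity are hypothetical.

```python
# Toy sketch of a desingularized Rankine-source influence system.
import numpy as np

def d_phi_dz(xp, xs):
    # vertical velocity induced at point xp by a unit source at xs,
    # from phi = ln(r) / (2*pi)
    dx, dz = xp[0] - xs[0], xp[1] - xs[1]
    return dz / (2.0 * np.pi * (dx**2 + dz**2))

n_pts = 20
x_col = np.linspace(-1.0, 1.0, n_pts)
collocation = np.column_stack([x_col, np.zeros(n_pts)])   # points on the calm water line
sources = np.column_stack([x_col, 0.1 * np.ones(n_pts)])  # desingularized: raised above the surface

A = np.array([[d_phi_dz(p, s) for s in sources] for p in collocation])  # influence matrix
w_target = np.full(n_pts, 0.05)            # hypothetical prescribed normal velocity
strengths = np.linalg.solve(A, w_target)   # source strengths satisfying the boundary condition
print(strengths[:5])
```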

Keywords: source point distribution, panel method, Rankine source, desingularized algorithm

Procedia PDF Downloads 362
15990 The Impact of Using Microlearning to Enhance Students' Programming Skills and Learning Motivation

Authors: Ali Alqarni

Abstract:

This study aims to explore the impact of microlearning on the development of programming skills as well as on the motivation for learning of first-year high schoolers in Jeddah. The sample consists of 78 students, distributed as 40 students in the control group and 38 students in the treatment group. The quasi-experimental method, which is a type of quantitative method, was used in this study. In addition to the technological tools used to create and deliver the digital content, the study utilized two tools to collect the data: first, an observation card containing a list of programming skills, and second, a tool to measure the students' motivation for learning. The findings indicate that microlearning positively impacts programming skills and learning motivation for students. The study, then, recommends implementing and expanding the use of microlearning in educational contexts, both at the general education level and the higher education level.

Keywords: educational technology, teaching strategies, online learning, microlearning

Procedia PDF Downloads 121
15989 Investigating the Motion of a Viscous Droplet in Natural Convection Using the Level Set Method

Authors: Isadora Bugarin, Taygoara F. de Oliveira

Abstract:

Binary fluids and emulsions, in general, are present in a vast range of industrial, medical, and scientific applications, showing complex behaviors responsible for defining the flow dynamics and the system operation. However, the literature describing these fluids in non-isothermal models is still limited. The present work brings a detailed investigation of droplet migration due to natural convection in a square enclosure, aiming to clarify the effects of drop viscosity on the flow dynamics by showing how distinct viscosity ratios (droplet/ambient fluid) influence the drop motion and the final movement pattern maintained in stationary regimes. The analysis was carried out by observing distinct combinations of Rayleigh number, drop initial position, and viscosity ratio. The Navier-Stokes and energy equations were solved considering the Boussinesq approximation in a laminar flow, using the finite differences method combined with the level set method for the binary flow solution. Previous results collected by the authors showed that the Rayleigh number and the drop initial position drastically affect the motion pattern of the droplet. For Ra ≥ 10⁴, two very marked behaviors were observed according to the initial position: the drop can travel either a helical path towards the center or a cyclic circular path resulting in a closed cycle in the stationary regime. The variation of the viscosity ratio showed a significant alteration of this pattern, exposing a large influence on the droplet path, capable of modifying the flow's behavior. Analyses of viscosity effects on the flow's unsteady Nusselt number were also performed. Among the relevant contributions proposed in this work is the potential use of the flow's initial conditions as a mechanism to control droplet migration inside the enclosure.

Keywords: binary fluids, droplet motion, level set method, natural convection, viscosity

Procedia PDF Downloads 113
15988 Sound Insulation between Buildings: The Impact Noise Transmission through Different Floor Configurations

Authors: Abdelouahab Bouttout, Mohamed Amara

Abstract:

The present paper examines impact noise transmission through some floor building assemblies. The Acoubat software has been used to numerically simulate the impact noise transmission through different floor configurations used in the Algerian construction mode. The results are compared with the available measurements. We have developed two experimental methods: i) a field method, and ii) a laboratory method using Brüel and Kjær equipment. The results show that the different floor configurations need some improvement to ensure acoustic comfort in the receiving apartment. The recommended value of the impact sound level in the receiving room should not exceed 58 dB. The results obtained in this paper can be used as a platform to improve the Algerian building acoustic regulations aimed at the construction of multi-storey residential buildings.

Keywords: impact noise, building acoustic, floor insulation, resilient material

Procedia PDF Downloads 370
15987 The Effect of Additives on Characterization and Photocatalytic Activity of Ag-TiO₂ Nanocomposite Prepared via Sol-Gel Process

Authors: S. Raeis Farshid, B. Raeis Farshid

Abstract:

Ag-TiO₂ nanocomposites were prepared by the sol-gel method with and without additives such as carboxymethyl cellulose (CMC), polyethylene glycol (PEG), polyvinyl pyrrolidone (PVP), and hydroxypropyl cellulose (HPC). The characteristics of the prepared Ag-TiO₂ nanocomposites were identified by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), and scanning electron microscopy (SEM) methods. The additives have a significant effect on the particle size distribution and photocatalytic activity of the Ag-TiO₂ nanocomposites. SEM images showed that the particle size distribution of the Ag-TiO₂ nanocomposite prepared in the presence of HPC was the best in comparison to the other samples. The photocatalytic activity of the synthesized nanocomposites was investigated for the decolorization of methyl orange (MO) in water under UV irradiation in a batch reactor, and the results showed that the photocatalytic activity of the nanocomposites was increased by CMC, PEG, PVP, and HPC, respectively.

Keywords: sol-gel method, Ag-TiO₂, decolorization, photocatalyst, nanocomposite

Procedia PDF Downloads 74
15986 Lateral Torsional Buckling: Tests on Glued Laminated Timber Beams

Authors: Vera Wilden, Benno Hoffmeister, Markus Feldmann

Abstract:

Glued laminated timber (glulam) is a preferred choice for long-span girders, e.g., for gyms or storage halls. While the material provides sufficient strength to resist the bending moments, large spans lead to increased slenderness of such members and to a higher susceptibility to stability issues, in particular to lateral torsional buckling (LTB). Rules for the determination of the ultimate LTB resistance are provided by Eurocode 5. The verification of the resistance may be performed using the so-called equivalent member method or by means of theory 2nd order calculations (direct method), considering equivalent imperfections. Both methods have significant limitations concerning their applicability: the equivalent member method is limited to rather simple cases, while the direct method lacks detailed provisions regarding imperfections and requirements for numerical modeling. In this paper, the results of a test series on slender glulam beams in three- and four-point bending are presented. The tests were performed in an innovative, newly developed testing rig, allowing for a very precise definition of loading and boundary conditions. The load was introduced by a hydraulic jack, which follows the lateral deformation of the beam by means of a servo-controller coupled with the tested member, keeping the load direction vertical. The deformation-controlled tests allowed for the identification of the ultimate limit state (governed by elastic stability) and the corresponding deformations. Prior to the tests, the structural and geometrical imperfections were determined and used later in the numerical models. After the stability tests, the nearly undamaged members were tested again in pure bending until reaching the ultimate moment resistance of the cross-section. These results, accompanied by numerical studies, were compared to resistance values obtained using both methods according to Eurocode 5.

Keywords: experimental tests, glued laminated timber, lateral torsional buckling, numerical simulation

Procedia PDF Downloads 226