Search results for: minimum deviation method

16099 Elaboration of Composites with Thermoplastic Matrix Polypropylene Charged by the Polyaniline Synthesized by the Self-Curling Method

Authors: Selma Saadia, Nacira Naar, Ahmed Benaboura

Abstract:

This work addresses the preparation of composites (PP/PANI) with polypropylene (PP) as the thermoplastic matrix and polyaniline (PANI) doped with sulfanilic acid (PANI-As) as the conductive filler. The formulations are intended for antistatic applications. The conductive polymer was synthesized by the self-curling method, which yields PANI nanoparticles with regular morphologies. The PANI/PP composites were processed into films by twin-screw extrusion. Several characterization methods are presented: spectroscopic, thermal, and electrical. The resulting composites showed a pseudo-homogeneous aspect, and the percolation threshold study showed that the formulation with 7% PANI is the most suitable for antistatic applications.

Keywords: extrusion, PANI, polypropylene, sulfanilic acid, self-curling

Procedia PDF Downloads 228
16098 Classification of Construction Projects

Authors: M. Safa, A. Sabet, S. MacGillivray, M. Davidson, K. Kaczmarczyk, C. T. Haas, G. E. Gibson, D. Rayside

Abstract:

To address construction project requirements and specifications, scholars and practitioners need to establish a taxonomy according to a scheme that best fits their need. While existing characterization methods are continuously being improved, new ones are devised to cover project properties which have not been previously addressed. One such method, the Project Definition Rating Index (PDRI), has received limited consideration strictly as a classification scheme. Developed by the Construction Industry Institute (CII) in 1996, the PDRI has been refined over the last two decades as a method for evaluating a project's scope definition completeness during front-end planning (FEP). The main contribution of this study is a review of practical project classification methods, and a discussion of how PDRI can be used to classify projects based on their readiness in the FEP phase. The proposed model has been applied to 59 construction projects in Ontario, and the results are discussed.

Keywords: project classification, project definition rating index (PDRI), risk, project goals alignment

Procedia PDF Downloads 664
16097 Synthesis and Fabrication of PANI-(SnO₂, ZnO)/rGO Biosensor Thin Films on a Glass Substrate by the Sol-Gel Method

Authors: Mohammad Arifin, Huda Abdullah, Norshafadzila Mohammad Naim

Abstract:

PANI-(SnO₂, ZnO)/rGO nanocomposite thin films were fabricated and investigated as E. coli bacteria sensors. The nanocomposite thin films were prepared by the sol-gel method and deposited on glass substrates using the spin-coating technique. The internal structure and surface morphology of the thin films were analyzed by X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), and atomic force microscopy (AFM). The optical properties of the films were investigated by UV-Vis spectroscopy, Raman spectroscopy, and Fourier transform infrared spectroscopy (FTIR). The sensing performance was assessed by measuring the change in conductivity before and after incubation with E. coli bacteria, using current-voltage (I-V) and cyclic voltammetry (C-V) measurements.

Keywords: PANI-(SnO₂, ZnO)/rGO, nanocomposite, bacteria sensor, thin films

Procedia PDF Downloads 98
16096 Authentication Based on Hand Movement by Low Dimensional Space Representation

Authors: Reut Lanyado, David Mendlovic

Abstract:

Most biometric authentication methods require special equipment, and some of them are easy to fake. We propose a method for authentication based on hand movement while typing a sentence, captured with a regular camera. This technique uses the full video of the hand, which is harder to fake. In the first phase, we track the hand joints in each frame. Next, we represent a single frame for each individual using our Pose Agnostic Rotation and Movement (PARM) low-dimensional space. Then, we represent the full video of hand movement in a fixed low-dimensional space using Fixed Dimension Video by Interpolation Statistics (FDVIS). Finally, we identify each individual in the FDVIS representation using unsupervised clustering and supervised methods. Accuracy exceeds 96% for 80 individuals using supervised KNN.
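
As a rough illustration of the final identification stage only (not the authors' PARM/FDVIS pipeline, whose details are not given here), the sketch below assumes each hand-movement video has already been reduced to a fixed-length feature vector and identifies users with a supervised KNN classifier; the data dimensions and all names are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical setup: 80 individuals, several videos each, every video already
# reduced to a fixed-length descriptor (stand-in for the FDVIS representation).
rng = np.random.default_rng(0)
n_users, videos_per_user, dim = 80, 10, 64
X = np.vstack([rng.normal(loc=u, scale=2.0, size=(videos_per_user, dim))
               for u in range(n_users)])            # one row per video
y = np.repeat(np.arange(n_users), videos_per_user)  # user identity labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Supervised KNN over the fixed-dimension descriptors, as in the abstract's
# final identification step.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print("identification accuracy:", knn.score(X_test, y_test))
```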

Keywords: authentication, feature extraction, hand recognition, security, signal processing

Procedia PDF Downloads 114
16095 Effect of Surface Quality of 3D Printed Impeller on the Performance of a Centrifugal Compressor

Authors: Nader Zirak, Mohammadali Shirinbayan, Abbas Tcharkhtchi

Abstract:

Additive manufacturing refers to methods that fabricate parts layer by layer. Its economic efficiency and its ability to produce complex parts have made it a focus of research and industry. In recent years, many studies have addressed the fabrication of impellers, a key component of turbomachinery, through this technique. Considering the important effect of the final surface quality of the impeller on system performance, this study investigates rotors printed by fused deposition modeling with different process parameters. The surface of each impeller was analyzed with a 3D scanner. The results show the vital role of surface quality in the final performance of the centrifugal compressor.

Keywords: additive manufacturing, impeller, centrifugal compressor, performance

Procedia PDF Downloads 134
16094 Unsteady Rayleigh-Bénard Convection of Nanoliquids in Enclosures

Authors: P. G. Siddheshwar, B. N. Veena

Abstract:

Rayleigh-Bénard convection of a nanoliquid in shallow, square and tall enclosures is studied using the Khanafer-Vafai-Lightstone single-phase model. The thermophysical properties of water, copper, copper oxide, alumina, silver and titania at 300 K under stagnant conditions, collected from the literature, are used in calculating the thermophysical properties of the water-based nanoliquids. Phenomenological laws and mixture theory are used for calculating these properties. Free-free, rigid-rigid and rigid-free boundary conditions are considered in the study. The intractable Lorenz model for each boundary combination is derived and then reduced to the tractable Ginzburg-Landau model. The amplitude thus obtained is used to quantify the heat transport in terms of the Nusselt number. Addition of nanoparticles is shown not to alter the influence of the nature of the boundaries on the onset of convection or on heat transport. Amongst the three enclosures considered, it is found that tall and shallow enclosures transport maximum and minimum energy, respectively. Enhancement of heat transport due to nanoparticles in the three enclosures is found to be in the range 3%-11%. Comparison of results in the case of rigid-rigid boundaries is made with those of an earlier work and good agreement is found. The study has limitations in the sense that the thermophysical properties are calculated using quantities modelled for static conditions.

Keywords: enclosures, free-free, rigid-rigid, rigid-free boundaries, Ginzburg-Landau model, Lorenz model

Procedia PDF Downloads 237
16093 Conceptual Modeling of the Relationship between Project Management Practices and Knowledge Absorptive Capacity Using Interpretive Structural Modeling Method

Authors: Seyed Abdolreza Mosavi, Alireza Babakhan, Elham Sadat Hoseinifard

Abstract:

Knowledge-based firms need to design mechanisms for the continuous absorption and creation of knowledge in order to ensure their survival in the competitive arena and to follow the path of development. Considering the project-oriented nature of product development activities in knowledge-based firms on the one hand, and the importance of analyzing the factors affecting knowledge absorptive capacity in these firms on the other, the purpose of this study is to identify and classify the effects of project management practices on knowledge absorptive capacity. For this purpose, we studied and reviewed the theoretical literature in the fields of project management and knowledge absorptive capacity so as to clarify their dimensions and indexes. Then, using the ISM method, the relationships between them were studied. To collect data, 21 questionnaires were distributed in project-oriented knowledge-based companies. The ISM analysis provides a model for the relationship between project management activities and knowledge absorptive capacity, which includes knowledge acquisition capacity, scope management, time management, cost management, quality management, human resource management, communications management, procurement management, risk management, stakeholder management and integration management. Having conducted the MICMAC analysis, we divided the variables into three groups of independent, relational and dependent variables, and no variables fell into the group of autonomous variables.

Keywords: knowledge absorptive capacity, project management practices, knowledge-based firms, interpretive structural modeling

Procedia PDF Downloads 185
16092 Reliable Method for Estimating Rating Curves in the Natural Rivers

Authors: Arash Ahmadi, Amirreza Kavousizadeh, Sanaz Heidarzadeh

Abstract:

The stage-discharge curve is one of the conventional methods for continuous river flow measurement. In this paper, an innovative approach is proposed for predicting the stage-discharge relationship using isovel contours. With the proposed method, it is possible to estimate the stage-discharge curve over the whole section using discharge information from only one arbitrary water level. For this purpose, multivariate relationships are used to determine the mean velocity in a cross-section. The unknown exponents of the proposed relationship were obtained using the second version of the Strength Pareto Evolutionary Algorithm (SPEA2), and the appropriate equation was selected by applying the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) approach. Results showed a close agreement between the estimated and observed data in the different cross-sections.
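
For readers unfamiliar with the selection step, the following is a minimal, self-contained TOPSIS sketch (not the authors' implementation): candidate rating-curve equations are scored against several criteria, and the one closest to the ideal solution is selected. The criteria, weights, and numbers are purely illustrative.

```python
import numpy as np

# Illustrative decision matrix: rows = candidate equations, columns = criteria.
# Criteria (hypothetical): goodness of fit, simplicity, robustness.
D = np.array([[0.92, 0.60, 0.75],
              [0.88, 0.90, 0.70],
              [0.95, 0.40, 0.85]])
weights = np.array([0.5, 0.2, 0.3])        # assumed criterion weights
benefit = np.array([True, True, True])     # all criteria are "larger is better"

# 1. Vector-normalize and weight the decision matrix.
V = weights * D / np.linalg.norm(D, axis=0)

# 2. Ideal and negative-ideal solutions.
ideal     = np.where(benefit, V.max(axis=0), V.min(axis=0))
neg_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3. Euclidean distances to both and the closeness coefficient.
d_plus  = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - neg_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("closeness coefficients:", closeness)
print("best equation index:", int(np.argmax(closeness)))
```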

Keywords: rating curves, SPEA2, natural rivers, bed roughness distribution

Procedia PDF Downloads 146
16091 Innovation in PhD Training in the Interdisciplinary Research Institute

Authors: B. Shaw, K. Doherty

Abstract:

The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute including art, design, media production, communication studies, computing and engineering. Across these disciplines it can seem like there are enormous differences of research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within, often unacknowledged, histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding of research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multi-disciplinary context. Instead of presenting results at a conference, research students were tasked to articulate their method of inquiry. A working party of students from across disciplines had to design a conference call, visual identity and an event framework that would work for students across all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame that would meet the expectations of different disciplines for the conference call was challenging. The first abstracts submitted either resorted to reporting findings or only described method briefly. It took several weeks of supported intervention for research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts the conference committee generated key methodological categories for conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’. Each session involved presentations by visual artists, communications students and computing researchers with inter-disciplinary dialogue, facilitated by alumni Chairs. The apparently simple focus on method illuminated the research process as a site of creativity, innovation and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reportage of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualizing generated by communications and art and design students, and art and design students gained understanding from the greater ‘distance’ and emphasis on application that computing students applied to their subjects.
Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.

Keywords: interdisciplinary, method, research student, training

Procedia PDF Downloads 191
16090 Investigation of Effects of Geomagnetic Storms Produced by Different Solar Sources on the Total Electron Content (TEC)

Authors: P. K. Purohit, Azad A. Mansoori, Parvaiz A. Khan, Purushottam Bhawre, Sharad C. Tripathi, A. M. Aslam, Malik A. Waheed, Shivangi Bhardwaj, A. K. Gwal

Abstract:

The geomagnetic storm represents the most outstanding example of solar wind-magnetosphere interaction, which causes global disturbances in the geomagnetic field as well as triggering ionospheric disturbances. We study the behaviour of ionospheric Total Electron Content (TEC) during geomagnetic storms. For the present investigation we selected 47 intense geomagnetic storms (Dst ≤ -100 nT) observed during solar cycle 23, i.e., during 1998-2006. We then categorized these storms into four categories depending upon their solar sources: Magnetic Cloud (MC), Co-rotating Interaction Region (CIR), SH+ICME and SH+MC. We then studied the behaviour of ionospheric TEC at a mid-latitude station, Usuda (36.13N, 138.36E), Japan, during the storm events produced by these four solar sources. We found that the smooth variations in TEC are replaced by rapid fluctuations and that the value of TEC is strongly enhanced during these storms for all four categories. However, the greatest enhancements in TEC are produced during those geomagnetic storms caused either by a sheath-driven magnetic cloud (SH+MC) or a sheath-driven ICME (SH+ICME). We also derived the correlation between the TEC enhancements produced during storms of each category and the minimum Dst. We found the strongest correlation exists for the SH+ICME category, followed by SH+MC, MC and finally CIR. Since the most intense storms were caused by either SH+ICME or SH+MC while the least intense storms were caused by CIR, the correlation was strongest with SH+ICME and SH+MC and weakest with CIR.

Keywords: GPS, TEC, geomagnetic storm, sheath driven magnetic cloud

Procedia PDF Downloads 530
16089 Study of Dispersion of Silica and Chitosan Nanoparticles into Gelatin Film

Authors: Mohit Batra, Noel Sarkar, Jayeeta Mitra

Abstract:

In this study, silica nanoparticles were synthesized using different methods and different silica sources, namely tetraethyl orthosilicate (TEOS), sodium silicate and rice husk, while chitosan nanoparticles were prepared by the ionic gelation method using sodium tripolyphosphate (TPP). The size and texture of the silica nanoparticles were studied using field emission scanning electron microscopy (FESEM) and transmission electron microscopy (TEM), along with the effect of changes in the concentration of various reagents in the different synthesis processes. The size and dispersion of silica nanoparticles prepared from TEOS using the Stöber method were found to be better than those of the other methods, while nanoparticles prepared from rice husk were cheaper than the others. The catalyst was found to play a very significant role in controlling the size of the nanoparticles in all methods.

Keywords: silica nanoparticles, gelatin, bio-nanocomposites, SEM, TEM, chitosan

Procedia PDF Downloads 301
16088 The Impact of Total Parenteral Nutrition on Pediatric Stem Cell Transplantation and Its Complications

Authors: R. Alramyan, S. Alsalamah, R. Alrashed, R. Alakel, F. Altheyeb, M. Alessa

Abstract:

Background: Nutritional support with total parenteral nutrition (TPN) is usually commenced in hematopoietic stem cell transplantation (HSCT) patients. However, it has both benefits and risks. Complications related to the central venous catheter, such as infections, and metabolic disturbances, including abnormal liver function, are usual concerns in such patients. Methods: A retrospective chart review of all pediatric patients who underwent HSCT between 2015 and 2018 in a tertiary hospital in Riyadh, Saudi Arabia. Patient demographics, type of conditioning, type of nutrition, and patient outcomes were collected. Statistical analysis was conducted using SPSS version 22. Frequencies and percentages were used to describe categorical variables; mean and standard deviation were used for continuous variables. A P value of less than 0.05 was considered statistically significant. Results: A total of 162 HSCTs were identified during the period mentioned. Indications for allogeneic transplant included hemoglobinopathy in 50 patients (31%) and acute lymphoblastic leukemia in 21 patients (13%). TPN was used in 96 patients (59.30%) for a median of 14 days, nasogastric tube (NGT) feeding in 16 patients (9.90%) for a median of 11 days, and 71 patients (43.80%) were able to tolerate oral feeding. Of the 96 patients (59.30%) who were dependent on TPN, 64 patients (66.7%) had severe mucositis, in comparison to 17 patients (25.8%) who were either on NGT or tolerated oral intake (P-value = 0.00). Sinusoidal obstruction syndrome (SOS) was seen in 14 patients (14.6%) receiving TPN compared to none in non-TPN patients (P-value = 0.001). Moreover, the majority of patients who had SOS received myeloablative conditioning therapy for non-malignant disease (hemoglobinopathy). However, there were no statistically significant differences in graft-versus-host disease (both acute and chronic), bacteremia, or patient outcome between the two groups. Conclusions: Nutritional support using TPN is used in the majority of patients, especially post-myeloablative conditioning associated with severe mucositis. TPN was associated with SOS, especially in hemoglobinopathy patients who received myeloablative therapy. This may emphasize the use of preventive measures such as fluid restriction, diuretics, or defibrotide in high-risk patients.

Keywords: hematopoietic stem cell transplant, HSCT, stem cell transplant, sinusoidal obstruction syndrome, total parenteral nutrition

Procedia PDF Downloads 141
16087 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where classes are composed of different numbers of sub-clusters with these sub-clusters containing different numbers of examples, also deteriorates the performance of the classifier. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic-based methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing the majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for the binary classification problem. Removing between-class and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used as the classifier, since it is one classifier in which the total error is minimized, and removing between-class and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling various problem domains, like credit scoring, customer churn prediction and financial distress, that typically involve imbalanced data sets.
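
A minimal sketch of the general idea, clustering the minority class with a model-based method and oversampling each sub-cluster, is given below. It is not the authors' algorithm, which also sizes the oversampling by sub-cluster complexity and uses a Löwner-John ellipsoid for unseen data; the library, component count, and data are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def oversample_minority(X_min, n_target, n_components=2, seed=0):
    """Fit a Gaussian mixture (model-based clustering) to the minority class
    and draw synthetic examples from the fitted components until the minority
    class reaches n_target examples."""
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(X_min)
    n_new = n_target - len(X_min)
    if n_new <= 0:
        return X_min
    # New points are sampled in proportion to each sub-cluster's mixture weight.
    X_new, _ = gmm.sample(n_new)
    return np.vstack([X_min, X_new])

# Tiny illustrative run with two minority sub-clusters of unequal size.
rng = np.random.default_rng(1)
X_min = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (10, 2))])
X_balanced = oversample_minority(X_min, n_target=100)
print(X_balanced.shape)   # (100, 2)
```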

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 402
16086 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, with some slight deviations arising from scale differences. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are carried over to a future civil aircraft whose size is quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study of the geometric similarity of airfoil parameters and surface mesh quality in CFD calculation is conducted to assess how different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, including the wing root and wingtip of a conventional civil aircraft and the wing root of a giant hybrid wing, treated with three parameterization methods to compare the calculation differences between different airfoil sizes. In this study, the constants are NACA 0012, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study also uses different numbers of edge-meshing divisions and the same bias factor in the CFD simulation. The results show that, as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of meshing divisions should be used to preserve the accuracy of the aerodynamic performance of the wing. When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to support the accuracy of the airfoil's aerodynamic performance, which faces the severe test of insufficient computing capacity. On the other hand, when using the B-spline curve method, the number of control points and the number of meshing divisions should be set appropriately to obtain higher accuracy; however, this balance cannot be defined directly and must be found by repeatedly adding and subtracting. Lastly, when using the CST method, it is found that a limited number of control points is enough to accurately parameterize the larger-sized wing, and a higher degree of accuracy and stability can be obtained even on a lower-performance computer.
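
For reference, a compact sketch of the class/shape function transformation (CST) idea for one airfoil surface is shown below; the class function exponents and the handful of Bernstein weights are the textbook choices for a round-nose airfoil and are not tied to the cases studied in the abstract.

```python
import numpy as np
from math import comb

def cst_surface(x, weights, n1=0.5, n2=1.0, dz_te=0.0):
    """Class/shape function transformation (CST) of an airfoil surface.
    x       : chordwise coordinates in [0, 1]
    weights : Bernstein coefficients controlling the shape function
    n1, n2  : class function exponents (0.5, 1.0 gives a round-nose airfoil)
    dz_te   : trailing-edge thickness offset
    """
    x = np.asarray(x, dtype=float)
    n = len(weights) - 1
    class_fn = x**n1 * (1.0 - x)**n2
    shape_fn = sum(w * comb(n, i) * x**i * (1.0 - x)**(n - i)
                   for i, w in enumerate(weights))
    return class_fn * shape_fn + x * dz_te

# Example: a crude symmetric section described by four control weights.
x = np.linspace(0.0, 1.0, 101)
upper = cst_surface(x, weights=[0.17, 0.16, 0.16, 0.17])
lower = -upper
print(float(upper.max()))   # approximate maximum half-thickness
```

Scaling the resulting coordinates by the chord length (3.98 m, 17.67 m, or 48 m in the study) gives the dimensional geometry, which is why a small, fixed set of CST control points can describe wings of very different sizes.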

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 208
16085 The Nonlinear Dynamic Response of a Rotor System Supported by Hydrodynamic Journal Bearings

Authors: Amira Amamou, Mnaouar Chouchane

Abstract:

This paper investigates the bifurcation and nonlinear behavior of a two-degree-of-freedom model of a symmetrical, balanced rigid rotor supported by two identical journal bearings. The hydrodynamic fluid-film reactions are modeled by applying both the short and the long bearing approximations and using the half-Sommerfeld solution. A numerical integration of the equations of the journal centre motion is presented to predict the presence and the size of stable or unstable limit cycles in the neighborhood of the critical stability speed. For their stability margins, a continuation method based on a predictor-corrector mechanism is used. The numerical results show that the stability and bifurcation behaviors of periodic motions depend strongly on the bearing parameters and dynamic characteristics.

Keywords: hydrodynamic journal bearing, nonlinear stability, continuation method, bifurcations

Procedia PDF Downloads 399
16084 Evaluation of Academic Research Projects Using the AHP and TOPSIS Methods

Authors: Murat Arıbaş, Uğur Özcan

Abstract:

Due to the increasing number of universities and academics, university funds for research activities and the grants/supports given by government institutions have increased the number and quality of academic research projects. Although every academic research project has a specific purpose and importance, limited resources (money, time, manpower, etc.) require choosing the best ones from all (Amiri, 2010). It is a hard process to compare projects and determine which is better, since the projects serve different purposes. In addition, the evaluation process has become complicated since there is more than one evaluator and there are multiple criteria for the evaluation (Dodangeh, Mojahed and Yusuff, 2009). Mehrez and Sinuany-Stern (1983) characterized the project selection problem as a Multi Criteria Decision Making (MCDM) problem. If a decision problem involves multiple criteria and objectives, it is called a Multi Attribute Decision Making problem (Ömürbek & Kınay, 2013). There are many MCDM methods in the literature for the solution of such problems, including AHP (Analytic Hierarchy Process), ANP (Analytic Network Process), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), UTADIS (Utilities Additives Discriminantes), ELECTRE (Elimination et Choix Traduisant la Realite), MAUT (Multiattribute Utility Theory) and GRA (Grey Relational Analysis). Each method has some advantages compared with the others (Ömürbek, Blacksmith & Akalın, 2013). Hence, to decide which MCDM method will be used for the solution of a problem, factors like the nature of the problem, types of choices, measurement scales, type of uncertainty, dependency among the attributes, expectations of the decision maker, and the quantity and quality of the data should be considered (Tavana & Hatami-Marbini, 2011). This study aims to develop a systematic decision process for grant support applications that are expected to be evaluated according to their scientific adequacy by multiple evaluators under certain criteria. In this context, the project evaluation process applied by The Scientific and Technological Research Council of Turkey (TÜBİTAK), one of the leading institutions in Turkey, was investigated. First, the criteria to be used in the project evaluation were decided. The main criteria were selected from among the TÜBİTAK evaluation criteria: originality of the project, methodology, project management/team and research opportunities, and extensive impact of the project. Moreover, for each main criterion, 2-4 sub-criteria were defined; hence it was decided to evaluate projects over 13 sub-criteria in total. Because the AHP method is well suited to determining criteria weights and the TOPSIS method provides the opportunity to rank a great number of alternatives, they are used together. The AHP method, developed by Saaty (1977), is based on selection by pairwise comparisons. Because of its simple structure and ease of understanding, AHP is a very popular method in the literature for determining criteria weights in MCDM problems. The TOPSIS method, developed by Hwang and Yoon (1981) as an MCDM technique, is an alternative to the ELECTRE method and is used in many areas. In this method, the distance from each decision point to the ideal and to the negative-ideal solution points is calculated using the Euclidean distance approach. In the study, the main criteria and sub-criteria were compared using questionnaires developed on an importance scale by four relevant groups of people (i.e., TÜBİTAK specialists, TÜBİTAK managers, academics, and individuals from the business world). After these pairwise comparisons, the weight of each main criterion and sub-criterion was calculated using the AHP method. These calculated criteria weights were then used as input to the TOPSIS method, and a sample of 200 projects was ranked. This new system provided the opportunity to bring the views of the people who take part in the project process, including preparation, evaluation and implementation, into the evaluation of academic research projects. Moreover, instead of using four main criteria with equal weights to evaluate projects, a systematic decision-making process was developed using 13 weighted sub-criteria and each decision point's distance from the ideal solution. Through this evaluation process, a new approach was created to determine the importance of academic research projects.
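
To make the weighting step concrete, a minimal AHP sketch is shown below: criterion weights are recovered from a pairwise comparison matrix via its principal eigenvector, together with Saaty's consistency ratio. The comparison values are illustrative and are not the TÜBİTAK survey results.

```python
import numpy as np

def ahp_weights(A):
    """Return AHP priority weights (principal eigenvector) and the
    consistency ratio of a pairwise comparison matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                   # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)  # Saaty's random index
    return w, ci / ri

# Illustrative 4x4 comparison of the four main criteria
# (originality, methodology, management/team, extensive impact).
A = [[1,   2,   3,   2],
     [1/2, 1,   2,   1],
     [1/3, 1/2, 1,   1/2],
     [1/2, 1,   2,   1]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```

In a full evaluation, the same computation would be repeated for each sub-criterion block, and the resulting weights would feed the TOPSIS ranking of the project alternatives.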

Keywords: academic projects, AHP method, research projects evaluation, TOPSIS method

Procedia PDF Downloads 578
16083 Anthropomorphism in the Primate Mind-Reading Debate: A Critique of Sober's Justification Argument

Authors: Boyun Lee

Abstract:

This study discusses whether the anthropomorphism some scientists tend to use in cross-species comparison can be justified epistemologically, especially in the primate mind-reading debate. Concretely, this study critically analyzes Elliott Sober's argument about the mind-reading hypothesis (MRH), an anthropomorphic hypothesis which states that nonhuman primates (e.g., chimpanzees) are mind-readers like humans. Although many scientists consider anthropomorphism an error and regard choosing an anthropomorphic hypothesis like MRH without definite evidence as invalid, Sober argues that anthropomorphism is supported by cladistic parsimony, which suggests choosing the simplest hypothesis postulating the minimum number of evolutionary changes, and that it can thereby be justified epistemologically in the mind-reading debate. However, his argument has several problems. First, Reichenbach's theorem, which Sober uses in the process of showing that MRH has a higher likelihood than its competing hypothesis, the behavior-reading hypothesis (BRH), does not fit the context of inferring evolutionary relationships. Second, the phylogenetic tree Sober supports is only one of the possible scenarios of MRH, and even without this problem, it is difficult to prove that the probability that nonhuman primate species and humans share mind-reading ability is higher than the probability of the alternative, considering how evolution occurs. Consequently, it seems hard to justify the anthropomorphism of MRH under Sober's argument. Some scientists and philosophers say that anthropomorphism sometimes helps in observing interesting phenomena or forming hypotheses in comparative biology. Nonetheless, we cannot conclude that it provides answers about why and how the interesting phenomena appear, or which of the hypotheses is better, at least in the mind-reading debate, under the current state of the evidence.

Keywords: anthropomorphism, cladistic parsimony, comparative biology, mind-reading debate

Procedia PDF Downloads 159
16082 Improving Human Resources Management in Indian Civil Service

Authors: Anant Deogaonkar, Archana Nanoty

Abstract:

The term civil service plays a vital role in the functioning of any government. In today's era of globalization, civil services contribute essentially to the success of the good governance system. The civil service in India refers to the body of government officials employed in civil occupations that are neither political nor judicial. The Indian Civil Services were created to foster the idea of unity in diversity, with the expectation of providing continuity and change in administration independent of the political scenario and turmoil affecting the country. The civil service is an integral part of administration, and the structures of administration determine the way the civil service functions. The concept of good governance necessarily requires effective human resource management to ensure that good governance reaches the root level. A serious matter of concern is the element of change: the civil service in general has maintained the status quo instead of responding to sweeping changes in the social and economic scenario. One may disagree, but it is a fact on the street that the Indian civil service has not been able to deliver up to the expectations of the people and has been lacking on the service front. The effective management of human resources in the civil service needs to be prioritized and will be a key factor in the successful delivery of the desired results, possibly in minimum time. This paper focuses on the various ways of effectively managing human resources in the civil services. It also highlights the importance of improving human resource management in the civil services, with a detailed discussion of its positives and negatives, if any.

Keywords: civil services, human resources management, India, governance

Procedia PDF Downloads 301
16081 Meta-Analysis of Particulate Matter Production in Developing and Developed Countries

Authors: Hafiz Mehtab Gull Nasir

Abstract:

Industrial development and urbanization have significant impacts on air emissions, and their relationship diverges at different stages of economic progress. The industrial revolution further propelled these activities as principal paths to economic and social transformation; nevertheless, these paths also promoted environmental degradation. As a result, both developed and developing countries underwent fast-paced development, during which developed countries implemented legislation for environmental pollution control whereas developing countries took advantage of technology without caring for the environment. In this study, a meta-analysis is performed on the production of particulate matter (i.e., PM10 and PM2.5) in urbanized cities of first, second and third world countries to assess air quality. The cities were selected based on ranked-set principles. In the case of PM10, third world countries showed the highest PM level (~95% confidence interval of 0.74-1.86), followed by second world countries, although with a more managed situation, while first world countries showed the lowest pollution (~95% confidence interval of 0.12-0.2). Similarly, the highest level of PM2.5 was produced by third world countries, followed by the second and first world countries. The level of PM2.5 was not significantly different between the second and third world countries; however, first world countries showed the minimum PM load. Finally, the study revealed that different levels of pollution exist among countries: developed countries have devised better strategies for pollution control, while developing countries care least about their environmental resources. It is suggested that although industrialization and urbanization directly interfere with natural elements, the resulting degradation appears to be more a societal matter than a purely technical one.

Keywords: meta-analysis, particulate matter, developing countries, urbanization

Procedia PDF Downloads 333
16080 Pricing, Production and Inventory Policies in Manufacturing under Stochastic Demand and Continuous Prices

Authors: Masoud Rabbani, Majede Smizadeh, Hamed Farrokhi-Asl

Abstract:

We study the joint determination of prices and production over a multi-period horizon under general non-stationary stochastic demand with continuous prices. In some periods, production capacity must be increased to satisfy demand. This paper presents a model to aid multi-period production capacity planning by quantifying the trade-off between product quality and production cost. Product quality is estimated as the statistical variation from the target performances obtained from the output tolerances of the production machines that manufacture the components. We consider different tolerances for the different machines used to increase capacity. Production cost is estimated as the total cost of owning and operating a production facility during the planning horizon, so capacity planning has a cost that impacts price. Pricing products is often difficult because customers have a reservation price they are willing to pay, which impacts both price and demand. We determine prices and production for the periods after capacity is enhanced and consider the reservation price when setting prices. First, we use an algorithm based on fuzzy sets of the optimal objective function values to determine the capacity plan, by maximizing the interval from the upper bound of the minimized objectives and defining weights for the objectives. Then we determine inventory and pricing policies. A lemma allows the problem to be solved in MATLAB and an exact answer to be found.

Keywords: price policy, inventory policy, capacity planning, product quality, epsilon-constraint

Procedia PDF Downloads 557
16079 Simplifying Seismic Vulnerability Analysis for Existing Reinforced Concrete Buildings

Authors: Maryam Solgi, Behzad Shahmohammadi, Morteza Raissi Dehkordi

Abstract:

One of the main steps in the seismic retrofitting of buildings is to determine the vulnerability of the structures. Current procedures for evaluating existing buildings are complicated, and there is no distinction between short, mid-rise, and tall buildings. This research utilizes a simplified method for assessing structures that is adequate for existing reinforced concrete buildings. To approach this aim, the Simple Lateral Mechanisms Analysis (SLaMA) procedure proposed by the NZSEE (New Zealand Society for Earthquake Engineering) has been carried out. In this study, three RC moment-resisting frame buildings are considered. First, these buildings are evaluated by the inelastic static procedure (pushover) based on acceptance criteria. Then, the Park-Ang damage index is determined for all members of each building by inelastic time history analysis. Next, the Simple Lateral Mechanisms Analysis procedure, a hand method, is carried out to define the capacity of the structures. Ultimately, the existing procedures are compared in terms of the peak ground acceleration causing failure (PGAfail). The results of this comparison emphasize that the pushover procedure and the SLaMA method give a greater value of PGAfail than the Park-Ang damage model.
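
Since the abstract relies on the Park-Ang damage index without stating it, the commonly cited form of that index is sketched below: it combines the maximum deformation demand with the dissipated hysteretic energy. The numerical values in the example are placeholders, not results from the three buildings in the study.

```python
def park_ang_damage_index(d_max, d_ult, e_hyst, f_yield, beta=0.05):
    """Commonly cited Park-Ang (1985) damage index for an RC member:
    DI = d_max / d_ult + beta * E_h / (F_y * d_ult)
    d_max   : maximum deformation under the earthquake
    d_ult   : ultimate deformation capacity under monotonic loading
    e_hyst  : cumulative hysteretic energy dissipated
    f_yield : yield strength of the member
    beta    : strength-degradation parameter (member dependent)
    """
    return d_max / d_ult + beta * e_hyst / (f_yield * d_ult)

# Placeholder member values (not from the study): a DI around 0.4-0.5 is
# conventionally read as moderate, repairable damage, while DI >= 1.0 is
# commonly taken to indicate collapse.
print(park_ang_damage_index(d_max=0.05, d_ult=0.12, e_hyst=18.0, f_yield=250.0))
```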

Keywords: peak ground acceleration caused to fail, reinforced concrete moment-frame buildings, seismic vulnerability analysis, simple lateral mechanisms analysis

Procedia PDF Downloads 83
16078 Displacement Based Design of a Dual Structural System

Authors: Romel Cordova Shedan

Abstract:

The traditional seismic design approach is the Force-Based Design (FBD) methodology. Displacement-Based Design (DBD) is a seismic design approach that considers structural damage so that the structure achieves a failure mechanism before collapse. It is easier to quantify the damage of a structure with displacements rather than forces; therefore, for a structure to achieve an inelastic design displacement with good ductility, it must be damaged. The first part of this investigation addresses the differences between the DBD and FBD methodologies, along with some advantages of DBD. The second part presents a case study of a 5-story dual-system building, which is regular in plan and elevation. The building is located in a seismic zone where the acceleration on firm soil is 45% of the acceleration of gravity. Both methodologies are then applied to the case study to compare displacements, shear forces and overturning moments. In the third part, Dynamic Time History Analysis (DTHA) is carried out to compare displacements with the DBD and FBD methodologies. Three accelerograms were used, with the acceleration magnitudes scaled to be compatible with the design spectrum. Then, using the ASCE 41-13 guidelines, plastic hinges were assigned to the structure. Finally, the results of both methodologies for the case study are compared. The seismic performance level of the building for DBD is greater than for the FBD method, because the DBD drifts are on the order of 2.0% and 2.5%, compared with FBD drifts of 0.7%; therefore, the displacements of DBD are greater than those of the FBD method. The shear forces of DBD are also greater than those of FBD. These DBD strengths ensure that the structure achieves the design inelastic displacements, because they were obtained using a displacement spectrum reduction factor that depends on the damping and ductility of the dual system. The displacements obtained for the case study with DBD are also greater than those of FBD and DTHA, which confirms that the seismic performance level of the building for DBD is greater than for the FBD method.

Keywords: displacement-based design, displacement spectrum reduction factor, dynamic time history analysis, force-based design

Procedia PDF Downloads 220
16077 Empirical Mode Decomposition Based Denoising by Customized Thresholding

Authors: Wahiba Mohguen, Raïs El’hadi Bekka

Abstract:

This paper presents a denoising method called EMD-Custom that is based on the Empirical Mode Decomposition (EMD) and modified Customized Thresholding Function (Custom) algorithms. EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). Then, all the noisy IMFs are thresholded by applying the proposed thresholding function to suppress noise and to improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising methods using soft and hard thresholding. The results showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches. The performance was evaluated in terms of SNR (in dB) and Mean Square Error (MSE).
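
As a point of reference for the baseline the authors compare against, the sketch below decomposes a noisy signal with EMD and soft-thresholds each IMF with a universal threshold; it assumes the PyEMD package and does not reproduce the proposed Custom thresholding function.

```python
import numpy as np
from PyEMD import EMD   # assumes the PyEMD package is installed

def emd_soft_denoise(x):
    """Baseline EMD denoising: decompose into IMFs, soft-threshold each IMF
    with a universal threshold estimated from its own noise level, and sum.
    (In practice one may threshold only the noise-dominated IMFs.)"""
    imfs = EMD().emd(x)
    n = len(x)
    out = np.zeros_like(x)
    for imf in imfs:
        sigma = np.median(np.abs(imf)) / 0.6745        # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(n))          # universal threshold
        out += np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)  # soft rule
    return out

# Synthetic test: noisy sine wave.
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)
denoised = emd_soft_denoise(noisy)
print("input SNR ~", 10 * np.log10(np.mean(clean**2) / np.mean((noisy - clean)**2)))
print("output SNR ~", 10 * np.log10(np.mean(clean**2) / np.mean((denoised - clean)**2)))
```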

Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft-thresholding

Procedia PDF Downloads 293
16076 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features needed for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, addressing one of the main drawbacks of CNNs. We extend this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which drives the model towards over-fitting, because it is not able to determine adequately discriminant feature vectors for some class labels. We reduce the risk of over-fitting by using a dynamic rather than static shape of the input tensor in the SoftMax layer, together with a specified soft margin. In effect, this acts as a controller for how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss aims at compacting the same class labels and separating different class labels in the normalized log domain. We penalize predictions with high divergence from the ground-truth labels; that is, we shorten correct feature vectors and enlarge false prediction tensors, assigning more weight to classes that lie close to each other (namely, 'hard labels to learn'). In this way, we constrain the model to generate more discriminative feature vectors for the different class labels. Finally, for the proposed optimizer, our focus is on addressing the weak convergence of the Adam optimizer on non-convex problems. The proposed optimizer works with an alternative gradient-updating procedure based on an exponentially weighted moving average for faster convergence, and it exploits a weight decay method that drastically reduces the learning rate near optima so as to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013 (a 16% improvement over the previous first rank after 10 years), 90.73% on RAF-DB, and 100% k-fold average accuracy on the CK+ dataset, providing top performance compared with other networks that require much larger training datasets.
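
The Dynamic Soft-Margin SoftMax itself is not specified in the abstract; as a rough illustration of the margin idea it builds on, the sketch below implements a plain additive-margin cross-entropy in PyTorch, where the margin m controls how hard the model must work to separate embeddings. This is a generic margin loss under assumed parameters, not the authors' dynamic variant.

```python
import torch
import torch.nn.functional as F

def additive_margin_softmax_loss(logits, targets, margin=0.35, scale=30.0):
    """Generic additive-margin softmax: subtract a fixed margin from the
    target-class logit before the (scaled) cross-entropy, which forces
    larger gaps between the correct class and its competitors."""
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    adjusted = logits - margin * one_hot      # penalize the correct class
    return F.cross_entropy(scale * adjusted, targets)

# Tiny smoke test with random logits for 7 classes (e.g., basic emotions).
torch.manual_seed(0)
logits = torch.randn(8, 7)             # batch of 8 samples, 7 classes
targets = torch.randint(0, 7, (8,))
print(additive_margin_softmax_loss(logits, targets))
```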

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 60
16075 The Attitudinal Effects of Dental Hygiene Students When Changing Conventional Practices of Preventive Therapy in the Dental Hygiene Curriculum

Authors: Shawna Staud, Mary Kaye Scaramucci

Abstract:

Objective: Rubber cup polishing has been a traditional method of preventive therapy in dental hygiene treatment. Newer methods such as air polishing have changed the way dental hygiene care is provided, yet this technique has not been embraced by students in the program nor by practitioners in the workforce. Students entering the workforce tend to follow office protocol and have limited confidence to introduce technologies learned in the curriculum. This project was designed to help students gain confidence in newer skills and to encourage private practice settings to adopt newer technologies for patient care. Our program recently introduced air polishing earlier in the program, before the rubber cup technique, to determine whether students would embrace the technology and become leading-edge professionals when they enter the marketplace. Methods: The class of 2022 was taught the traditional method of polishing in the first-year curriculum and air polishing in the second-year curriculum. The class of 2023 will be taught the air polishing method in the first-year curriculum and the traditional method of polishing in the second-year curriculum. Pre- and post-graduation survey data will be collected from both cohorts. Descriptive statistics and paired pre/post t-tests, with alpha set at .05, will be used to compare pre- and post-survey results. Results: This study is currently in progress, with a completion date of October 2023. The class of 2022 completed the pre-graduation survey in the spring of 2022, and the post-graduation survey will be sent out in October 2022. The class of 2023 cohort will be surveyed in the spring of 2023 and October 2023. Conclusion: Our hypothesis is that students who are taught air polishing first will be more inclined to adopt that skill in private practice, thereby embracing newer technology and improving oral health care.

Keywords: luggage handling system at world’s largest pilgrimage center

Procedia PDF Downloads 89
16074 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, owing to the difficulty of collecting training samples. Hence, many studies have developed feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with different bandwidths for different features. Moreover, it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths so that the within-class separability is smallest and the between-class separability is largest simultaneously. This indicates that the corresponding feature space is more suitable for classification; in addition, the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also show the importance of the bands for hyperspectral image classification. The reciprocals of these bandwidths can be viewed as band weights: the smaller the bandwidth, the larger the weight of the band and the greater its importance for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset, and all non-background samples were used to form the testing dataset. A support vector machine was applied to classify the testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, the F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas the F-score and HSIC select 168 features and 217 features, respectively. Moreover, the classification accuracy increases dramatically when using only the first few features: the classification accuracies for feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84168) is close to the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, similar results are obtained. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method first to determine a suitable feature subset according to a specific purpose; researchers can then use only the corresponding sensor bands to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
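
A minimal sketch of the kernel ingredient is given below: a generalized RBF kernel with a separate bandwidth per spectral band, and a band ranking by the reciprocal of each bandwidth, as described in the abstract. The bandwidth values here are placeholders; in the paper they are tuned by a genetic algorithm against the within/between-class separability.

```python
import numpy as np

def generalized_rbf_kernel(X, Y, bandwidths):
    """RBF kernel with a separate bandwidth per feature (spectral band):
    k(x, y) = exp(-sum_d (x_d - y_d)^2 / (2 * h_d^2))."""
    h = np.asarray(bandwidths, dtype=float)
    diff = X[:, None, :] - Y[None, :, :]          # (n, m, d) pairwise differences
    return np.exp(-np.sum(diff**2 / (2.0 * h**2), axis=-1))

# Placeholder bandwidths for 6 bands (tuned by a GA in the paper).
bandwidths = np.array([0.4, 2.5, 0.8, 5.0, 1.2, 0.6])
X = np.random.default_rng(0).normal(size=(5, 6))  # 5 pixels, 6 bands
K = generalized_rbf_kernel(X, X, bandwidths)
print(K.shape)                                     # (5, 5)

# Band importance: the smaller the bandwidth, the larger the weight, so rank
# bands by the reciprocals of their bandwidths in descending order.
order = np.argsort(1.0 / bandwidths)[::-1]
print("bands ranked by importance:", order)
```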

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 253
16073 A Method to Assess an Aspect of Sustainable Development: Walkability

Authors: Amna Ali Al-Saadi, Riken Homma, Kazuhisa Iki

Abstract:

Despite the fact that many places have succeeded in achieving some aspects of sustainable urban development, there are no scientific facts to convince decision makers, and each success was developed to fulfill the needs of a specific city only. Therefore, an objective method for generating solutions from a successful case is the aim of this research. The questions were: how to learn the lesson from each case study; how to distinguish the potential criteria from the negative ones; and how to quantify their effects on future development. Walkability has been selected as the goal, because it has been found to be a solution for achieving a healthy lifestyle as well as social, environmental and economic sustainability. Moreover, it is as complex as every other aspect of sustainable development. This research is based on a quantitative-comparative methodology for assessing pedestrian-oriented development. Three analyzed areas (AAs) were selected: one site in Oman, hypothesized to be motorized-oriented development, and two sites in Japan, where the development is pedestrian-friendly. The study used the multi-criteria evaluation method (MCEM). Initially, MCEM stands on the analytic hierarchy process (AHP). The latter was structured into a main goal (walkability), objectives (functions and layout) and attributes (the urban form criteria). Secondly, GIS was used to evaluate the attributes in multi-criteria maps. Since each criterion has a different scale of measurement, all results were standardized by z-score and used to measure the correlations among criteria. As a result, a different scenario was generated from each AA. MCEM (AHP-OWA) with GIS measured the walkability score and determined the priority of criteria development in the non-walker-friendly environment. The comparison of criteria z-scores presented a measurable distinction in the orientation of development; this result was used to show that Oman is a motorized environment while Japan is walkable. It also identified the powerful criteria and the weak criteria regardless of the AA, and this result was used to generalize the priorities for walkable development. In conclusion, the method was found to be successful in generating a scientific basis for policy decisions.
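
A small sketch of the scoring step described above is given below: each criterion layer is standardized by z-score and combined with weights into a single walkability score per analyzed area. The criteria and weights are invented for illustration (in the study the weights come from AHP and the layers from GIS), and all criteria are oriented so that higher values favour walkability.

```python
import numpy as np

def walkability_score(criteria, weights):
    """Standardize each criterion (column) by z-score and combine them with
    the given weights into one walkability score per analyzed area (row)."""
    C = np.asarray(criteria, dtype=float)
    z = (C - C.mean(axis=0)) / C.std(axis=0)   # z-score per criterion
    w = np.asarray(weights, dtype=float)
    return z @ (w / w.sum())

# Invented example: 3 analyzed areas x 4 urban-form criteria
# (intersection density, land-use mix, sidewalk coverage, transit access).
criteria = [[120, 0.6, 0.8, 15],
            [40,  0.2, 0.3, 4],
            [150, 0.7, 0.9, 18]]
weights = [0.3, 0.3, 0.2, 0.2]                 # stand-ins for AHP weights
print(np.round(walkability_score(criteria, weights), 2))
```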

Keywords: walkability, policy decisions, sustainable development, GIS

Procedia PDF Downloads 426
16072 Utilization of Hybrid Teaching Methods to Improve Writing Skills of Undergraduate Students

Authors: Tahira Zaman

Abstract:

The paper intends to examine the utility of hybrid teaching methods in helping undergraduate students improve their English academic writing skills. A total of 45 undergraduate students were selected randomly from three classes with varying language abilities, with monitoring and rubric evaluation as the means of measurement in the research design. The students' language skills were developed with the help of experiential learning methods: a reflective writing technique, a guided method in which students were directed to the correct forms of writing, and a self-guided method in which students produced a library-research-based article assessed through a standardized rubric. The progress of the students was monitored and checked through rubrics and self-evaluation, and it was concluded that a change was observed in the students' writing abilities.

Keywords: self-evaluation, hybrid, reflective writing

Procedia PDF Downloads 149
16071 Some Efficient Higher Order Iterative Schemes for Solving Nonlinear Systems

Authors: Sandeep Singh

Abstract:

In this article, two classes of iterative schemes are proposed for approximating solutions of nonlinear systems of equations, with orders of convergence six and eight, respectively. The sixth-order scheme requires the evaluation of two vector functions, two first Fréchet derivatives, and three matrix inversions per iteration. This three-step sixth-order method is further extended to an eighth-order method, which requires one more step and the evaluation of one extra vector function. Moreover, the computational efficiency is compared with that of some other recently published methods, and our methods are found to be more efficient than existing numerical methods for medium and large nonlinear systems of equations. Numerical tests are performed to validate the proposed schemes.
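
The abstract does not reproduce the schemes themselves; purely as a structural illustration, the sketch below implements a generic two-step Newton-type iteration for a nonlinear system, in which each outer iteration evaluates the function twice and solves two linear systems. It is not the authors' sixth- or eighth-order method.

```python
import numpy as np

def two_step_newton(F, J, x0, tol=1e-12, max_iter=50):
    """Generic two-step Newton-type iteration for F(x) = 0:
    y = x - J(x)^{-1} F(x);  x_next = y - J(y)^{-1} F(y).
    Two function evaluations and two Jacobian solves per outer iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = x - np.linalg.solve(J(x), F(x))
        x_next = y - np.linalg.solve(J(y), F(y))
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Small test system: x0^2 + x1 - 3 = 0,  x0 + x1^2 - 5 = 0  (solution (1, 2)).
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(two_step_newton(F, J, x0=[1.0, 1.0]))
```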

Keywords: nonlinear systems, computational complexity, order of convergence, Jarratt-type scheme

Procedia PDF Downloads 120
16070 Numerical Investigation of Fluid Outflow through a Retinal Hole after Scleral Buckling

Authors: T. Walczak, J. K. Grabski, P. Fritzkowski, M. Stopa

Abstract:

The objectives of the study are i) to perform numerical simulations that permit an analysis of the dynamics of subretinal fluid when an implant has induced scleral intussusception and ii) to assess the impact of the physical parameters of the model on the flow rate. Computer simulations were created using the finite element method (FEM), based on a model that takes into account the interaction of a viscous fluid (subretinal fluid) with a hyperelastic body (retina). The purpose of the calculation was to investigate the dependence of the flow rate of subretinal fluid through a hole in the retina on different factors, such as the viscosity of the subretinal fluid, the material parameters of the retina, and the offset of the implant from the retinal hole. These simulations were performed for different speeds of eye movement that reflect the behavior of the eye during reading, REM, and saccadic movements. As in other works in the field of subretinal fluid flow, a stationary, single-sided, forced fluid flow was assumed in the considered area simulating the subretinal space. Additionally, a hyperelastic material model of the retina and a parameterized geometry of the considered model were adopted. The calculations also examined the influence of the direction of gravity, due to the position of the patient's head, on the trend of fluid outflow. The simulations revealed that fluid outflow from the retina becomes significant at an eyeball movement speed of 100°/sec. This speed is greater than in the case of reading but is four times less than saccadic movement. Increasing the viscosity of the fluid increased the beneficial effect. Further, the simulation results suggest that moderate eye movement speed is optimal and that the conventional prescription of avoiding routine eye movement following retinal detachment surgery should be relaxed. Additionally, to verify the numerical results, some calculations were repeated using a meshless method (the method of fundamental solutions), which is relatively fast and easy to implement. The paper has been supported by grant 02/21/DSPB/3477.

Keywords: CFD simulations, FEM analysis, meshless method, retinal detachment

Procedia PDF Downloads 331