Search results for: Capital Employed
364 Incorporating Semantic Similarity Measure in Genetic Algorithm: An Approach for Searching the Gene Ontology Terms
Authors: Razib M. Othman, Safaai Deris, Rosli M. Illias, Hany T. Alashwal, Rohayanti Hassan, Farhan Mohamed
Abstract:
The most important property of the Gene Ontology is its terms. These controlled vocabularies are defined to provide consistent descriptions of gene products that are shareable and computationally accessible to humans, software agents, and other machine-readable metadata. Each term is associated with information such as a definition, synonyms, database references, amino acid sequences, and relationships to other terms. This information has made the Gene Ontology broadly applied in microarray and proteomic analysis. However, the process of searching the terms is still carried out using a traditional approach based on keyword matching. The weaknesses of this approach are that it ignores semantic relationships between terms and depends heavily on a specialist to find similar terms. Therefore, this study combines a semantic similarity measure and a genetic algorithm to perform a better retrieval process for searching semantically similar terms. The semantic similarity measure is used to compute the similarity strength between two terms. Then, the genetic algorithm is employed to perform batch retrievals and to handle the large search space of the Gene Ontology graph. Computational results are presented to show the effectiveness of the proposed algorithm.
Keywords: Gene Ontology, Semantic similarity measure, Genetic algorithm, Ontology search
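As an illustration of how such a retrieval could be wired together, the sketch below couples a simple shared-ancestor similarity with a basic genetic algorithm over a toy is-a hierarchy. The term IDs, the ontology fragment, the similarity measure, and the GA operators are illustrative assumptions, not the authors' actual measure, parameters, or Gene Ontology data.

```python
import random

# Toy is-a DAG standing in for a tiny slice of an ontology (hypothetical term IDs).
PARENTS = {
    "T1": [],            # root term
    "T2": ["T1"],
    "T3": ["T1"],
    "T4": ["T2", "T3"],
    "T5": ["T4"],
    "T6": ["T4"],
    "T7": ["T6"],
}
TERMS = list(PARENTS)

def ancestors(term):
    """All ancestors of a term (including itself) in the is-a DAG."""
    seen, stack = {term}, [term]
    while stack:
        for p in PARENTS[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def similarity(a, b):
    """Simple ancestor-overlap similarity: shared ancestors / union of ancestors."""
    A, B = ancestors(a), ancestors(b)
    return len(A & B) / len(A | B)

def fitness(candidate_set, query):
    """Average similarity of a candidate term set to the query term."""
    return sum(similarity(t, query) for t in candidate_set) / len(candidate_set)

def ga_search(query, set_size=3, pop_size=20, generations=40, pm=0.2):
    """Evolve sets of terms that are semantically close to the query (batch retrieval)."""
    pop = [random.sample(TERMS, set_size) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, query), reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = list(dict.fromkeys(a + b))[:set_size]   # crossover: merge parents
            while len(child) < set_size:                    # pad if duplicates shrank it
                t = random.choice(TERMS)
                if t not in child:
                    child.append(t)
            if random.random() < pm:                        # mutation: swap in a random term
                child[random.randrange(set_size)] = random.choice(TERMS)
            children.append(child)
        pop = parents + children
    best = max(pop, key=lambda s: fitness(s, query))
    return best, fitness(best, query)

print(ga_search("T6"))
```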
363 A TRIZ-based Approach to Generation of Service-supporting Product Concepts
Authors: Seungkyum Kim, Yongtae Park
Abstract:
Recently, the business environment and customer needs have been changing rapidly; hence, it is very difficult to fulfill sophisticated customer needs by product or service innovation alone. In practice, to cope with this problem, various manufacturing companies have developed services to combine with their products. Along with this, many academic studies on PSS (Product Service System), which is the integrated system of products and services, have been conducted from the viewpoint of manufacturers. On the other hand, service providers are also attempting to develop service-supporting products to increase their service competitiveness and provide differentiated value. However, there is a lack of research based on the service-centric point of view. Accordingly, this paper proposes a concept generation method for service-supporting product development from the service-centric point of view. This method is designed to be executed in five consecutive steps: situation analysis, problem definition, problem resolution, solution evaluation, and concept generation. In the proposed approach, tools of TRIZ (Theory of Inventive Problem Solving) such as the ISQ (Innovative Situation Questionnaire) and the 40 inventive principles are employed in order to define problems of the current services and solve them by generating service-supporting product concepts. This research contributes to the development of service-supporting products and service-centric PSSs.
Keywords: TRIZ, PSS (Product Service System), service-supporting product, concept generation
362 Improvement of Parallel Compressor Model in Dealing Outlet Unequal Pressure Distribution
Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang
Abstract:
The Parallel Compressor Model (PCM) is a simplified approach for predicting compressor performance with inlet distortions. In PCM calculations, it is assumed that the sub-compressors' outlet static pressure is uniform, which simplifies the PCM calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised calculation method of PCM that can correct the error. The revised method employs the energy, momentum, and continuity equations to acquire the needed parameters and replace the equal-static-pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Their performance under non-uniform inlet conditions is predicted through the revised calculation method and used to evaluate the method's efficiency. Validating the results against experimental data, it is found that, although some deviation occurs, the calculated results agree well with the experimental data, with errors ranging from 0.1% to 3%. This shows that the revised calculation method of PCM possesses clear advantages in predicting the performance of a distorted compressor with a limited exhaust duct.
Keywords: Parallel Compressor Model (PCM), Revised Calculation Method, Inlet Distortion, Outlet Unequal Pressure Distribution.
361 Hot Workability of High Strength Low Alloy Steels
Authors: Seok Hong Min, Jung Ho Moon, Woo Young Jung, Tae Kwon Ha
Abstract:
The hot deformation behavior of high strength low alloy (HSLA) steels with different chemical compositions under hot working conditions, in the temperature range of 900 to 1100℃ and the strain rate range of 0.1 to 10 s-1, has been studied by performing a series of hot compression tests. The dynamic materials model has been employed for developing the processing maps, which show the variation of the efficiency of power dissipation with temperature and strain rate. Kumar's model has also been used for developing the instability map, which shows the variation of the instability for plastic deformation with temperature and strain rate. The efficiency of power dissipation increased with decreasing strain rate and increasing temperature in the steel with higher Cr and Ti content. A high efficiency of power dissipation over 20% was obtained at a finite strain level of 0.1 under conditions of strain rates lower than 1 s-1 and temperatures higher than 1050℃. Plastic instability was expected in the regime of temperatures lower than 1000℃ and strain rates lower than 0.3 s-1. The steel with lower Cr and Ti contents showed high efficiency of power dissipation at higher strain rates and lower temperatures.
Keywords: High strength low alloy steels, hot workability, dynamic materials model, processing maps.
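For reference, the quantities mapped here are commonly defined through the dynamic materials model relations below. This is the standard textbook formulation (with the Prasad-type instability parameter), which may differ in detail from the Kumar model variant used by the authors.

```latex
% Strain-rate sensitivity m, efficiency of power dissipation eta, and a
% commonly used flow-instability criterion (standard DMM forms).
m = \left.\frac{\partial \ln \sigma}{\partial \ln \dot{\varepsilon}}\right|_{\varepsilon,\,T},
\qquad
\eta = \frac{2m}{m+1},
\qquad
\xi(\dot{\varepsilon}) = \frac{\partial \ln\left(\tfrac{m}{m+1}\right)}{\partial \ln \dot{\varepsilon}} + m < 0 .
```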
360 Seismic Behaviour of Steel Frames Investigation with Knee Brace Based on Pushover Analysis
Authors: Mahmoud Miri, Abdolreza Zare, Hossein Abbas zadeh
Abstract:
The knee bracing steel frame (KBF) is a new kind of energy-dissipating frame, which combines excellent ductility and lateral stiffness. In this framing system, a special form of diagonal brace, connected to a knee element instead of the beam-column joint, is investigated. Recently, a similar system was proposed and named the chevron knee bracing system (CKB), which, in comparison with the former system, has better energy absorption characteristics while retaining the elastic nature of the structure. Knee bracing can provide a stiffer bracing system but reduces the ductility of the steel frame. Chevron knee bracing can be employed to provide the desired ductility level for a design. In this article, the relations between seismic performance and structural parameters of the two above-mentioned systems are investigated and compared. Frames with similar dimensions but various heights are designed in both systems according to the Iranian code of practice for seismic-resistant design of buildings, and then, based on a non-linear static pushover analysis, seismic parameters such as the behavior factor and performance levels are compared.
Keywords: Seismic behaviour, ordinary knee bracing frame, Chevron knee brace, behaviour factor, performance level.
359 Investigating Crime Hotspot Places and their Implication to Urban Environmental Design: A Geographic Visualization and Data Mining Approach
Authors: Donna R. Tabangin, Jacqueline C. Flores, Nelson F. Emperador
Abstract:
Information is power. Geographical information is an emerging science that is advancing the development of knowledge to further help in understanding the relationship of "place" with other disciplines such as crime. The researchers used crime data for the years 2004 to 2007 from the Baguio City Police Office to determine the incidence and actual locations of crime hotspots. A combined qualitative and quantitative research methodology was employed through extensive fieldwork and observation, geographic visualization with Geographic Information Systems (GIS) and Global Positioning Systems (GPS), and data mining. The paper discusses emerging geographic visualization and data mining tools and methodologies that can be used to generate baseline data for environmental initiatives such as urban renewal and rejuvenation. The study was able to demonstrate that crime hotspots can be computed and were seen to be occurring at some select places in the Central Business District (CBD) of Baguio City. It was observed that some characteristics of the hotspot places' physical design and milieu may play an important role in creating opportunities for crime. A list of these environmental attributes was generated. This derived information may be used to guide the design or redesign of the urban environment of the city to reduce crime and at the same time improve it physically.
Keywords: Crime mapping, data mining, environmental design, geographic visualization, GIS.
358 Metabolomics Profile Recognition for Cancer Diagnostics
Authors: Valentina L. Kouznetsova, Jonathan W. Wang, Igor F. Tsigelny
Abstract:
Metabolomics has become a rising field of research for various diseases, particularly cancer. Increases or decreases in metabolite concentrations in the human body are indicative of various cancers. Further elucidation of metabolic pathways and their significance in cancer research may greatly spur medicinal discovery. We analyzed the metabolomics profiles of lung cancer. Thirty-three metabolites were selected as significant. These metabolites are involved in 37 metabolic pathways delivered by the MetaboAnalyst software. The top pathways are the glyoxylate and dicarboxylate pathway (its hubs are formic acid and glyoxylic acid), along with the citrate cycle pathway, followed by the taurine and hypotaurine pathway (whose hubs are taurine and sulfoacetaldehyde) and the glycine, serine, and threonine pathway (whose hubs are glycine and L-serine). We studied interactions of the metabolites with the proteins involved in cancer-related signaling networks and developed an approach to metabolomics biomarker use in cancer diagnostics. Our analysis showed that a significant part of lung-cancer-related metabolites interacts with the main cancer-related signaling pathways present in this network: the PI3K–mTOR–AKT pathway, the RAS–RAF–ERK1/2 pathway, and the NFKB pathway. These results can be employed for the use of metabolomics profiles in elucidation of the related cancer protein signaling networks.
Keywords: Cancer, metabolites, metabolic pathway, signaling pathway.
357 UPFC Supplementary Controller Design Using Real-Coded Genetic Algorithm for Damping Low Frequency Oscillations in Power Systems
Authors: A.K. Baliarsingh, S. Panda, A.K. Mohanty, C. Ardil
Abstract:
This paper presents a systematic approach for designing Unified Power Flow Controller (UPFC) based supplementary damping controllers for damping low frequency oscillations in a single-machine infinite-bus power system. Detailed investigations have been carried out considering four alternative UPFC-based damping controllers, namely the modulating index of the series inverter (mB), the modulating index of the shunt inverter (mE), the phase angle of the series inverter (δB), and the phase angle of the shunt inverter (δE). The design problem of the proposed controllers is formulated as an optimization problem, and a Real-Coded Genetic Algorithm (RCGA) is employed to optimize the damping controller parameters. Simulation results are presented and compared with a conventional method of tuning the damping controller parameters to show the effectiveness and robustness of the proposed design approach.
Keywords: Power System Oscillations, Real-Coded Genetic Algorithm (RCGA), Flexible AC Transmission Systems (FACTS), Unified Power Flow Controller (UPFC), Damping Controller.
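A minimal real-coded GA skeleton of the kind referred to above is sketched below. The objective is a stand-in (the ISE of a toy decaying oscillation plus a small effort penalty) and the parameter bounds are hypothetical; the single-machine infinite-bus UPFC model and controller structure from the paper are not reproduced.

```python
import random
import numpy as np

def stand_in_objective(params):
    """Stand-in objective, NOT the paper's power-system model: ISE of a toy
    decaying oscillation whose decay/frequency depend on the two parameters."""
    k1, k2 = params
    t = np.linspace(0.0, 10.0, 1000)
    response = np.exp(-k1 * t) * np.cos(2.0 * np.pi * k2 * t)
    return float(np.trapz(response ** 2, t)) + 0.01 * k1 ** 2  # ISE + effort penalty

BOUNDS = [(0.1, 5.0), (0.1, 2.0)]   # hypothetical parameter ranges

def rcga(objective, bounds, pop_size=30, generations=60, pc=0.9, pm=0.1):
    """Real-coded GA: truncation selection, arithmetic crossover, Gaussian mutation."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        elite = pop[: pop_size // 2]
        offspring = []
        while len(offspring) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = list(p1)
            if random.random() < pc:                          # arithmetic (blend) crossover
                a = random.random()
                child = [a * x + (1.0 - a) * y for x, y in zip(p1, p2)]
            for i, (lo, hi) in enumerate(bounds):             # Gaussian mutation + clipping
                if random.random() < pm:
                    child[i] += random.gauss(0.0, 0.1 * (hi - lo))
                child[i] = min(max(child[i], lo), hi)
            offspring.append(child)
        pop = elite + offspring
    best = min(pop, key=objective)
    return best, objective(best)

print(rcga(stand_in_objective, BOUNDS))
```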
356 On Algebraic Structure of Improved Gauss-Seidel Iteration
Authors: O. M. Bamigbola, A. A. Ibrahim
Abstract:
Analysis of real-life problems often results in linear systems of equations for which solutions are sought. The method to employ depends, to some extent, on the properties of the coefficient matrix. It is not always feasible to solve linear systems of equations by direct methods; as such, the need to use an iterative method becomes imperative. Before an iterative method can be employed to solve a linear system of equations, there must be a guarantee that the solution process will converge. This guarantee, which must be determined a priori, involves the use of some criterion expressible in terms of the entries of the coefficient matrix. It is, therefore, logical that the convergence criterion should depend implicitly on the algebraic structure of such a method. However, in deference to this view is the practice of conducting convergence analysis for Gauss-Seidel iteration on a criterion formulated based on the algebraic structure of Jacobi iteration. To remedy this anomaly, the Gauss-Seidel iteration was studied for its algebraic structure and, contrary to the usual assumption, it was discovered that the iteration matrix of the Gauss-Seidel method is diagonally dominant only in its first row, while the other rows do not satisfy diagonal dominance. With the aid of this structure, we herein fashion out an improved version of Gauss-Seidel iteration with the prospect of enhancing convergence and robustness of the method. A numerical section is included to demonstrate the validity of the theoretical results obtained for the improved Gauss-Seidel method.
Keywords: Linear system of equations, Gauss-Seidel iteration, algebraic structure, convergence.
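For context, the classical Gauss-Seidel iteration that the paper takes as its starting point can be written compactly as below. This is the standard method with a strictly diagonally dominant example system of our own choosing, not the improved variant proposed by the authors.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Classical Gauss-Seidel iteration for Ax = b. Convergence is guaranteed,
    e.g., for strictly diagonally dominant or symmetric positive definite A."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use the already-updated components x[:i]; this is what
            # distinguishes Gauss-Seidel from Jacobi iteration.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, k + 1
    return x, max_iter

# Illustrative strictly diagonally dominant system.
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [15.0, 10.0, 10.0]
solution, iterations = gauss_seidel(A, b)
print(solution, iterations)
```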
355 Load Discontinuity in Shock Response and Its Remedies
Authors: Shuenn-Yih Chang, Chiu-Li Huang
Abstract:
It has been shown that a load discontinuity at the end of an impulse will result in an extra impulse, and hence an extra amplitude distortion, if a step-by-step integration method is employed to yield the shock response. In order to overcome this difficulty, three remedies are proposed to reduce the extra amplitude distortion. The first remedy is to solve the momentum equation of motion instead of the force equation of motion in the step-by-step solution of the shock response, where an external momentum is used in the solution of the momentum equation of motion. Since the external momentum is a resultant of the time integration of the external force, the problem of load discontinuity will automatically disappear. The second remedy is to perform a single small time step immediately upon termination of the applied impulse, while the other time steps can still be conducted by using the time step determined from general considerations. This is because the extra impulse caused by a load discontinuity at the end of an impulse is almost linearly proportional to the step size. Finally, the third remedy is to use the average of the two different values at the integration point of the load discontinuity to replace the use of one of them for the loading input. The basic motivation of this remedy originates from the concept of no loading-input error associated with the integration point of load discontinuity. The feasibility of the three remedies is analytically explained and numerically illustrated.
Keywords: Dynamic analysis, load discontinuity, shock response, step-by-step integration
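The third remedy is easy to prototype for a single-degree-of-freedom system. In the sketch below, a rectangular impulse is integrated with the constant-average-acceleration Newmark method, once taking the load at the discontinuity as-is and once replacing it by the average of its left and right limits. The mass, stiffness, pulse, and step size are hypothetical values chosen only for illustration.

```python
import numpy as np

def rectangular_pulse(t, p0=1.0, td=1.0):
    """Rectangular impulse: p0 for t <= td, zero afterwards (discontinuity at td)."""
    return p0 if t <= td else 0.0

def newmark_sdof(m, c, k, load, dt, t_end, average_at=None):
    """Constant-average-acceleration Newmark integration of m*x'' + c*x' + k*x = p(t).
    If average_at is given, the load at that time station is replaced by the
    average of its left and right limits (the paper's third remedy)."""
    gamma, beta = 0.5, 0.25
    n = int(round(t_end / dt)) + 1
    t = np.arange(n) * dt
    p = np.array([load(ti) for ti in t])
    if average_at is not None:
        i = int(round(average_at / dt))            # integration point at the discontinuity
        p[i] = 0.5 * (load(average_at - 1e-12) + load(average_at + 1e-12))
    x, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (p[0] - c * v[0] - k * x[0]) / m
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    for i in range(n - 1):
        p_eff = (p[i + 1]
                 + m * (x[i] / (beta * dt ** 2) + v[i] / (beta * dt) + (1 / (2 * beta) - 1) * a[i])
                 + c * (gamma / (beta * dt) * x[i] + (gamma / beta - 1) * v[i]
                        + dt * (gamma / (2 * beta) - 1) * a[i]))
        x[i + 1] = p_eff / k_eff
        v[i + 1] = (gamma / (beta * dt) * (x[i + 1] - x[i])
                    + (1 - gamma / beta) * v[i] + dt * (1 - gamma / (2 * beta)) * a[i])
        a[i + 1] = ((x[i + 1] - x[i]) / (beta * dt ** 2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
    return t, x

# Hypothetical undamped SDOF system (m = 1, k = 100) under a 1 s rectangular pulse.
t, x_plain = newmark_sdof(1.0, 0.0, 100.0, rectangular_pulse, dt=0.05, t_end=3.0)
_, x_avg = newmark_sdof(1.0, 0.0, 100.0, rectangular_pulse, dt=0.05, t_end=3.0, average_at=1.0)
print(abs(x_plain).max(), abs(x_avg).max())
```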
354 An Evaluation of Kahoot Application and Its Environment as a Learning Tool
Authors: Muhammad Yasir Babar, Ebrahim Panah
Abstract:
Over the past 20 years, the internet has seen continual advancement, and with the advent of online technology, various types of web-based games have been developed. Games are frequently used among different age groups, from baby boomers to Generation Z. Games are not only used for entertainment but are also utilized as a learning approach, transmitting education at a level that is more interesting and effective for students. One popular web-based educational game is Kahoot, with growing popularity and usage across different fields of study. However, little knowledge is available on university students' perception of the Kahoot environment and application for learning subjects. Hence, the objective of the current study is to investigate students' perceptions of the Kahoot application and environment as a learning tool. The study employed a survey approach by distributing a Google Forms-created questionnaire, with a high reliability index, to 62 students (11 males and 51 females). The findings show that students have positive attitudes towards the Kahoot application and its environment for learning. Regarding the Kahoot application, it was indicated that activities created using Kahoot are more interesting for students, Kahoot is useful for collaborative learning, and Kahoot enhances interest in learning the lesson. In terms of the Kahoot environment, it was found that using this application through mobile is easy for students, its design is simple and useful, Kahoot-created activities can easily be shared, and the application can easily be used on any platform. The findings of the study have implications for instructors, policymakers and curriculum developers.
Keywords: Application, environment, Kahoot, learning tool.
353 Design and Optimization for a Compliant Gripper with Force Regulation Mechanism
Authors: Nhat Linh Ho, Thanh-Phong Dao, Shyh-Chour Huang, Hieu Giang Le
Abstract:
This paper presents the design and optimization of a compliant gripper. The gripper is constructed based on the concept of a compliant mechanism with flexure hinges. A passive force regulation mechanism is presented to control the grasping force on a micro-sized object instead of using a force sensor. The force regulation mechanism is designed using planar springs. The gripper is expected to obtain a large range of displacement to handle various sized objects. First, the statics and dynamics of the gripper are investigated using finite element analysis in the ANSYS software. Then, the design parameters of the gripper are optimized via the Taguchi method. An L9 orthogonal array is used to establish the experimental matrix. Subsequently, the signal-to-noise ratio is analyzed to find the optimal solution. Finally, the response surface methodology is employed to model the relationship between the design parameters and the output displacement of the gripper. The design of experiment method is then used to analyze the sensitivity so as to determine the effect of each parameter on the displacement. The results showed that the compliant gripper can move with a large displacement of 213.51 mm and that the force regulation mechanism is expected to be used for high precision positioning systems.
Keywords: Flexure hinge, compliant mechanism, compliant gripper, force regulation mechanism, Taguchi method, response surface methodology, design of experiment.
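As a pointer to how the Taguchi stage above works, the sketch below evaluates a larger-the-better signal-to-noise ratio over a standard L9(3^4) orthogonal array and picks the level with the highest mean S/N for each factor. The displacement values and factor assignments are hypothetical placeholders, not the gripper data from the paper.

```python
import numpy as np

# Standard L9(3^4) orthogonal array (levels 0, 1, 2 for four factors).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Hypothetical measured displacements for the nine runs (placeholder values).
displacement = np.array([180.0, 195.0, 205.0, 188.0, 210.0, 199.0, 192.0, 206.0, 213.0])

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio for a 'larger-the-better' response."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

sn = np.array([sn_larger_is_better(y) for y in displacement])

# Mean S/N ratio per factor level: the level with the highest mean is taken
# as the (near-)optimal setting for that factor.
for factor in range(L9.shape[1]):
    means = [sn[L9[:, factor] == level].mean() for level in range(3)]
    best = int(np.argmax(means))
    print(f"factor {factor}: mean S/N per level = {np.round(means, 3)}, best level = {best}")
```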
352 The Shaping of a Triangle Steel Plate into an Equilateral Vertical Steel by Finite-Element Modeling
Authors: Tsung-Chia Chen
Abstract:
The orthogonal processes that shape a triangular steel plate into an equilateral vertical steel are examined by an incremental elasto-plastic finite-element method based on an updated Lagrangian formulation. The highly non-linear problems due to the geometric changes, the inelastic constitutive behavior, and the boundary conditions varied with deformation are taken into account in an incremental manner. On the contact boundary, a modified Coulomb friction model is specially considered. A weighting factor r-minimum is employed to limit the step size of the loading increment to a linear relation. In particular, selective reduced integration was adopted to formulate the stiffness matrix. The simulated geometries of verticality clearly demonstrate the vertical processes until unloading. A series of experiments and simulations were performed to validate the formulation in the theory, leading to the development of the computer codes. The whole deformation history and the distribution of stress, strain, and thickness during the forming process were obtained by carefully considering the moving boundary condition in the finite-element method. Therefore, this modeling can be used for judging whether an equilateral vertical steel can be shaped successfully. The present work may be expected to improve the understanding of the formation of the equilateral vertical steel.
Keywords: Elasto-plastic, finite element, orthogonal pressing process, vertical steel.
351 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework
Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim
Abstract:
Background modeling and subtraction in video analysis has been widely used as an effective method for moving object detection in many computer vision applications. Recently, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are the most frequently occurring problems in practical situations. This paper presents a two-layer model based on the codebook algorithm incorporating a local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is designed as a block-based codebook combining an LBP histogram with the mean value of each RGB color channel. Because of the invariance of the LBP features with respect to monotonic gray-scale changes, this layer can produce block-wise detection results with considerable tolerance of illumination variations. A pixel-based codebook is employed to reinforce the precision of the output of the first layer and to eliminate false positives further. As a result, the proposed approach can greatly promote accuracy under circumstances of dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.
Keywords: Background subtraction, codebook model, local binary pattern, dynamic background, illumination changes.
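To make the texture component concrete, the sketch below computes a basic 8-neighbour LBP code image and per-block LBP histograms in NumPy. It illustrates the block-level feature only, not the full two-layer codebook model; the frame, block size, and neighbour ordering are illustrative choices.

```python
import numpy as np

def lbp_8neighbors(gray):
    """Basic 8-neighbour local binary pattern for a 2-D grayscale image.
    Each neighbour brighter than (or equal to) the centre contributes one bit."""
    g = np.asarray(gray, dtype=float)
    center = g[1:-1, 1:-1]
    # Offsets of the 8 neighbours, ordered clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes += (neighbor >= center).astype(np.int32) << bit
    return codes

def block_lbp_histogram(gray, block=16):
    """Normalised 256-bin LBP histogram for each non-overlapping block, i.e. a
    block-level texture feature used alongside per-channel colour means."""
    codes = lbp_8neighbors(gray)
    h, w = codes.shape
    feats = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            hist, _ = np.histogram(codes[y:y + block, x:x + block], bins=256, range=(0, 256))
            feats[(y, x)] = hist / hist.sum()
    return feats

frame = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in for a video frame
features = block_lbp_histogram(frame)
print(len(features), next(iter(features.values())).shape)
```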
350 A Novel VLSI Architecture of Hybrid Image Compression Model based on Reversible Blockade Transform
Authors: C. Hemasundara Rao, M. Madhavi Latha
Abstract:
Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. Furthermore, the discrete cosine transform has emerged as the state-of-the-art standard for image compression. In this paper, a hybrid image compression technique based on reversible blockade transform coding is proposed. The technique, implemented over regions of interest (ROIs), is based on the selection of coefficients that belong to different transforms, depending on the region. This method allows: (1) codification of multiple kernels at various degrees of interest, (2) arbitrarily shaped spectra, and (3) flexible adjustment of the compression quality of the image and the background. No modification of the standard JPEG2000 decoder was required. The method was applied to different types of images. Results show better performance for the selected regions than when image coding methods were employed for the whole set of images. We believe that this method is an excellent tool for future image compression research, mainly for images where such coding can be of interest, such as medical imaging modalities and several multimedia applications. Finally, the VLSI implementation of the proposed method is shown. It is also shown that the Hartley and cosine transform kernels give better performance than the other models.
Keywords: VLSI, Discrete Cosine Transform, JPEG, Hartley transform, Radon Transform
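For orientation, the sketch below shows the basic transform-coding round trip that such hybrid schemes build on: an 8×8 block DCT-II, retention of the strongest coefficients per block, and the inverse transform. It is a plain NumPy illustration; the reversible blockade transform, Hartley/Radon kernels, ROI handling, and the VLSI architecture of the paper are not reproduced, and the test image is synthetic.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix C so that coeffs = C @ block @ C.T."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def block_dct_compress(img, keep=10, n=8):
    """Transform each n x n block, keep only the `keep` largest-magnitude
    coefficients per block, and inverse-transform (no quantisation/entropy coding)."""
    C = dct_matrix(n)
    h, w = (img.shape[0] // n) * n, (img.shape[1] // n) * n
    img = np.asarray(img[:h, :w], dtype=float)
    out = np.zeros_like(img)
    for y in range(0, h, n):
        for x in range(0, w, n):
            coeffs = C @ img[y:y + n, x:x + n] @ C.T
            thresh = np.sort(np.abs(coeffs).ravel())[-keep]   # keep the strongest coefficients
            coeffs[np.abs(coeffs) < thresh] = 0.0
            out[y:y + n, x:x + n] = C.T @ coeffs @ C          # inverse DCT
    return out

image = np.random.rand(64, 64) * 255                          # synthetic stand-in image
reconstructed = block_dct_compress(image, keep=10)
mse = np.mean((image - reconstructed) ** 2)
print(f"PSNR: {10 * np.log10(255.0 ** 2 / mse):.2f} dB")
```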
349 A Methodology for Automatic Diversification of Document Categories
Authors: Dasom Kim, Chen Liu, Myungsu Lim, Soo-Hyeon Jeon, Byeoung Kug Jeon, Kee-Young Kwahk, Namgyu Kim
Abstract:
Recently, numerous documents including large volumes of unstructured data and text have been created because of the rapid increase in the use of social media and the Internet. Usually, these documents are categorized for the convenience of users. However, the accuracy of manual categorization is not guaranteed, and such categorization requires a large amount of time and incurs huge costs. Many studies on automatic categorization have been conducted to help mitigate the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorize complex documents with multiple topics, because they work on the assumption that individual documents can be categorized into single categories only. Therefore, to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, the learning process employed in these studies involves training using a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets using traditional multi-categorization algorithms are provided. To overcome this limitation, in this study, we review our novel methodology for extending the category of a single-categorized document to multiple categories, and then introduce a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.
Keywords: Big Data Analysis, Document Classification, Text Mining, Topic Analysis.
348 Selective Harmonic Elimination of PWM AC/AC Voltage Controller Using Hybrid RGA-PS Approach
Authors: A. K. Al-Othman, Nabil A. Ahmed, A. M. Al-Kandari, H. K. Ebraheem
Abstract:
Selective harmonic elimination pulse-width modulation techniques offer tight control of the harmonic spectrum of a given voltage waveform generated by a power electronic converter, along with a low number of switching transitions. Traditional optimization methods suffer from various drawbacks, such as prolonged and tedious computational steps and convergence to local optima; thus, the greater the number of harmonics to be eliminated, the larger the computational complexity and time. This paper presents a novel method for output voltage harmonic elimination and voltage control of PWM AC/AC voltage converters using the principle of a hybrid Real-Coded Genetic Algorithm-Pattern Search (RGA-PS) method. RGA is the primary optimizer, exploiting its global search capabilities; PS is then employed to fine-tune the best solution provided by RGA in each evolution. The proposed method enables linear control of the fundamental component of the output voltage and complete elimination of its harmonic contents up to a specified order. Theoretical studies have been carried out to show the effectiveness and robustness of the proposed method of selective harmonic elimination. Theoretical results are validated through simulation studies using the PSIM software package.
Keywords: PWM, AC/AC voltage converters, selective harmonic elimination, direct search method, pattern search method, real-coded genetic algorithms, evolutionary algorithms, optimization.
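To give a feel for the underlying equations, the sketch below solves a classic textbook SHE system (three switching angles of a quarter-wave-symmetric staircase waveform: fix the fundamental, cancel the 5th and 7th harmonics) with a plain Newton-type solver rather than the RGA-PS hybrid. The waveform model, modulation index, and initial guess are assumptions for illustration; the AC/AC converter waveform in the paper leads to a different set of equations.

```python
import numpy as np
from scipy.optimize import fsolve

def she_equations(alphas, m_index=0.7):
    """Classic SHE system for a three-angle, quarter-wave-symmetric staircase
    waveform (e.g. a 7-level cascaded inverter): set the fundamental to the
    desired modulation index and cancel the 5th and 7th harmonics.
    Harmonic amplitude model: h_n ~ (4/(n*pi)) * sum_k cos(n * alpha_k)."""
    a1, a2, a3 = alphas
    return [
        np.cos(a1) + np.cos(a2) + np.cos(a3) - 3.0 * m_index,   # fundamental target
        np.cos(5 * a1) + np.cos(5 * a2) + np.cos(5 * a3),        # 5th harmonic = 0
        np.cos(7 * a1) + np.cos(7 * a2) + np.cos(7 * a3),        # 7th harmonic = 0
    ]

initial_guess = np.radians([15.0, 35.0, 60.0])    # assumed starting angles
angles = fsolve(she_equations, initial_guess, args=(0.7,))
print("switching angles (deg):", np.degrees(angles))
print("residuals:", she_equations(angles, 0.7))
```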
347 The Impact of Local Decision-Making in Regional Development Schemes on the Achievement of Efficiency in EU Funds
Authors: Kuyucu Helvacioglu Asli Deniz, Tektas Arzu
Abstract:
European Union candidate status provides strong motivation for decision-making in the candidate countries in shaping regional development policy, where there is an envisioned transfer of power from the center to the periphery. The process of Europeanization anticipates that candidate countries will configure their regional institutional templates in the context of the requirements of European Union policies, and it introduces new incentive instruments of enlargement to be employed in regional development schemes. It is observed that the contribution of local actors to decision-making in the design of allocation architectures enhances the efficiency of the funds and increases the positive effects of the projects funded under the regional development objectives. This study aims at exploring the performance of three regional development grant schemes in Turkey, established and allocated under the pre-accession process, with special emphasis given to the roles of the national and local actors in decision-making for regional development. Efficiency analyses have been conducted using the DEA methodology, which has proved to be a superior method for comparative efficiency and benchmarking measurements. The findings of this study, in parallel with similar international studies, show that the participation of local actors in decision-making on funding contributes to both the quality and the efficiency of the projects funded under the EU schemes.
Keywords: Efficiency, European Union Funds, Regional Development, Turkey
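For readers unfamiliar with DEA, the sketch below sets up the input-oriented CCR envelopment model as a linear program and solves it with SciPy; one LP per decision-making unit yields its efficiency score in (0, 1]. The units, inputs, outputs, and numbers are hypothetical placeholders, not the grant-scheme data analysed in the study.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 5 decision-making units (e.g. grant schemes/regions),
# 2 inputs (funds allocated, staff) and 1 output (projects completed).
X = np.array([[100.0, 8.0], [120.0, 10.0], [90.0, 9.0], [150.0, 12.0], [110.0, 7.0]])  # inputs
Y = np.array([[40.0], [45.0], [38.0], [50.0], [46.0]])                                  # outputs

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR (constant returns to scale) efficiency of DMU j0.
    Decision variables: z = [theta, lambda_1..lambda_n]; minimise theta."""
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])
    b_in = np.zeros(m)
    # Outputs: -sum_j lambda_j * y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

for j in range(len(X)):
    print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```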
346 The Islamic Element of Al-‘Adl in Critical Thinking: the Perception of Muslim Engineering Undergraduates in Malaysia
Authors: Mohd Nuri Al-Amin Endut, Wan Suhaimi Wan Abdullah, Zulqarnain Abu Bakar
Abstract:
The element of justice, or al-‘adl, in the context of Islamic critical thinking deals with the notion of justice in a thinking process that critically rationalizes the truth in a fair and objective manner, with no irrelevant interference that can jeopardize a sound judgment. This Islamic axiological element is vital in technological decision-making as it addresses the issues of religious values and ethics that are primarily set to fulfill the purpose of human life on earth. The main objective of this study was to examine and analyze the perception of Muslim engineering students in Malaysian higher education institutions towards the concept of al-‘adl as an essential element of Islamic critical thinking. The study employed a mixed-methods approach comprising data collection from a questionnaire survey and interview responses. A total of 557 Muslim engineering undergraduates from six Malaysian universities participated in the study. The study generally indicated that Muslim engineering undergraduates in the higher institutions have a rather good comprehension of and consciousness for al-‘adl, with only slight awareness of the importance of objective thinking. Nonetheless, a few items on the concept implied a comparatively low perception of rational justice in Islam as the means to grasp the ultimate truth.
Keywords: Engineering education, Islamic critical thinking, rational justice, perception, tertiary education.
345 A Weighted-Profiling Using an Ontology Base for Semantic-Based Search
Authors: Hikmat A. M. Abd-El-Jaber, Tengku M. T. Sembok
Abstract:
Information on the Web is increasing tremendously. A number of search engines have been developed for searching Web information and retrieving relevant documents that satisfy inquirers' needs. Search engines provide inquirers with irrelevant documents among the search results, since the search is text-based rather than semantic-based. The information retrieval research area has presented a number of approaches and methodologies, such as profiling, feedback, query modification, and human-computer interaction, for improving search results. Moreover, information retrieval has employed artificial intelligence techniques and strategies, such as machine learning heuristics, tuning mechanisms, user and system vocabularies, and logical theory, for capturing users' preferences and using them to guide the search based on semantic rather than syntactic analysis. Although valuable improvements have been recorded in search results, surveys have shown that search engine users are still not really satisfied with their search results. Using ontologies for semantic-based searching is likely the key solution. Adopting a profiling approach and using ontology-base characteristics, this work proposes a strategy for finding the exact meaning of the query terms in order to retrieve relevant information according to user needs. The evaluation of the conducted experiments has shown the effectiveness of the suggested methodology, and conclusions are presented.
Keywords: Information retrieval, user profiles, semantic Web, ontology, search engine.
344 Comparison of Automated Zone Design Census Output Areas with Existing Output Areas in South Africa
Authors: T. Mokhele, O. Mutanga, F. Ahmed
Abstract:
South Africa is one of the few countries that have stopped using the same Enumeration Areas (EAs) for both census enumeration and dissemination. The advantage of this change is that confidentiality issues can be addressed for census dissemination, since the geographic unit for collection is designed mainly to ensure that it is covered by one enumerator. The objective of this paper was to evaluate the performance of automated zone design output areas against non-zone-design geographies, using the 2001 census data, and to some extent the 2011 census, as the main input. A comparison of the Automated Zone-design Tool (AZTool) census output areas with the Small Area Layers (SALs) and SubPlaces, based on the confidentiality limit, population distribution, degree of homogeneity, and shape compactness, was undertaken. Further, SPSS was employed for validation of the AZTool output results. The results showed that the AZTool-developed output areas outperform the existing official SALs and SubPlaces with regard to the minimum population threshold and population distribution, and to some extent homogeneity. Therefore, it was concluded that the AZTool program provides a new alternative for the creation of optimised census output areas for the dissemination of population census data in South Africa.
Keywords: AZTool, enumeration areas, small areal layers, South Africa.
343 Potential of Tourism Logistic Service Business in the Border Areas of Chong Anma, Chong Sa-Ngam, and Chong Jom Checkpoints in Thailand to Increase Competitive Efficiency among the ASEAN Community
Authors: Pariwat Somnuek
Abstract:
This study focused on tourism logistic services in the border areas of Thailand through an analysis and comparison of the opinions of tourists, villagers, and entrepreneurs of these services. The sample representatives of this study were a total of 600 villagers and 15 entrepreneurs in the three border areas of the Chong Anma, Chong Sa-Ngam, and Chong Jom checkpoints. For methodology, survey questionnaires, situation analysis, a TOWS matrix, and focus group discussions were used for data collection, while descriptive analysis and statistics such as arithmetic means and standard deviations were employed for data analysis. The findings revealed that business potential was at the medium level and that entrepreneurs were satisfied with their turnovers. However, the transportation and tourism services provided for tourists need immediate improvement. Recommendations for potential development included the promotion of border tourism destinations and foreign investment in accommodation, restaurants, and transport, as well as the establishment of business networks between Thailand and Cambodia, along with the introduction of new tourism destinations through co-operation between entrepreneurs in both countries. These initiatives may lead to increased visitors, collaboration of security offices, and an improved image of tourism security.
Keywords: Business potential, potential development, tourism logistics, services.
342 Applying the Extreme-Based Teaching Model in Post-Secondary Online Classroom Setting: A Field Experiment
Authors: Leon Pan
Abstract:
The first programming course within post-secondary education has long been recognized as a challenging endeavor for both educators and students alike. Historically, these courses have exhibited high failure rates and a notable number of dropouts. Instructors often lament students' lack of effort on their coursework, and students often express frustration that the teaching methods employed are not effective. Drawing inspiration from the successful principles of Extreme Programming, this study introduces an approach—the Extremes-based teaching model—aimed at enhancing the teaching of introductory programming courses. To empirically determine the effectiveness of the model, a comparison was made between a section taught using the extreme-based model and another utilizing traditional teaching methods. Notably, the extreme-based teaching class required students to work collaboratively on projects, while also demanding continuous assessment and performance enhancement within groups. This paper details the application of the extreme-based model within the post-secondary online classroom context and presents the compelling results that emphasize its effectiveness in advancing the teaching and learning experiences. The extreme-based model led to a significant increase of 13.46 points in the weighted total average and a commendable 10% reduction in the failure rate.
Keywords: Extreme-based teaching model, innovative pedagogical methods, project-based learning, team-based learning.
341 Evolutionary Techniques for Model Order Reduction of Large Scale Linear Systems
Authors: S. Panda, J. S. Yadav, N. P. Patidar, C. Ardil
Abstract:
Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among various modern heuristic optimization techniques. The GA has been popular in academia and industry mainly because of its intuitiveness, ease of implementation, and ability to effectively solve highly non-linear, mixed-integer optimization problems that are typical of complex engineering systems. The PSO technique is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. In this paper, both PSO and GA optimization are employed for finding stable reduced-order models of single-input single-output large-scale linear systems. Both techniques guarantee the stability of the reduced-order model if the original high-order model is stable. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model pertaining to a unit step input. Both methods are illustrated through a numerical example from the literature, and the results are compared with a recently published conventional model reduction technique.
Keywords: Genetic Algorithm, Particle Swarm Optimization, Order Reduction, Stability, Transfer Function, Integral Squared Error.
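A compact version of the ISE-based reduction idea is sketched below: a plain global-best PSO fits a stable second-order model to a given higher-order transfer function by minimizing the ISE between their unit-step responses. The fourth-order example system, the bounds, and the PSO settings are illustrative assumptions, not the literature example used in the paper.

```python
import numpy as np
from scipy import signal

# Illustrative stable 4th-order system (poles at -1, -2, -3, -4), not the paper's example.
FULL = signal.TransferFunction([10.0, 40.0], [1.0, 10.0, 35.0, 50.0, 24.0])
T = np.linspace(0.0, 10.0, 1000)
_, Y_FULL = signal.step(FULL, T=T)

def ise(params):
    """ISE between unit-step responses of the full model and a candidate
    2nd-order model b0 / (s^2 + a1*s + a0); unstable candidates are penalised."""
    b0, a1, a0 = params
    if a1 <= 0.0 or a0 <= 0.0:                 # Routh-Hurwitz for 2nd order: a1, a0 > 0
        return 1e6
    _, y = signal.step(signal.TransferFunction([b0], [1.0, a1, a0]), T=T)
    return float(np.trapz((Y_FULL - y) ** 2, T))

def pso(objective, bounds, n_particles=25, iters=80, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm optimiser."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, (n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, best_ise = pso(ise, bounds=[(0.1, 20.0), (0.1, 10.0), (0.1, 10.0)])
print("reduced model params [b0, a1, a0]:", best, "ISE:", best_ise)
```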
340 Comparison of Particle Swarm Optimization and Genetic Algorithm for TCSC-based Controller Design
Authors: Sidhartha Panda, N. P. Padhy
Abstract:
Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among various modern heuristic optimization techniques. Since the two approaches are supposed to find a solution to a given objective function but employ different strategies and computational effort, it is appropriate to compare their performance. This paper presents the application and performance comparison of the PSO and GA optimization techniques for Thyristor Controlled Series Compensator (TCSC)-based controller design. The design objective is to enhance power system stability. The design problem of the FACTS-based controller is formulated as an optimization problem, and both the PSO and GA optimization techniques are employed to search for the optimal controller parameters. The performance of both optimization techniques in terms of computational time and convergence rate is compared. Further, the optimized controllers are tested on a weakly connected power system subjected to different disturbances, and their performance is compared with a conventional power system stabilizer (CPSS). The eigenvalue analysis and non-linear simulation results are presented and compared to show the effectiveness of both techniques in designing a TCSC-based controller to enhance power system stability.
Keywords: Thyristor Controlled Series Compensator, genetic algorithm, particle swarm optimization, Phillips-Heffron model, power system stability.
339 Discrete and Stationary Adaptive Sub-Band Threshold Method for Improving Image Resolution
Authors: P. Joyce Beryl Princess, Y. Harold Robinson
Abstract:
Image processing is a branch of signal processing in which the input is an image and the output is either an image or a set of parameters of the image. Image resolution is frequently referred to as an important aspect of an image. In image resolution enhancement, images are processed in order to obtain an enhanced resolution; the goal is to generate a highly resolved image, with a high PSNR value, from a low-resolution input. The stationary wavelet transform (SWT) is used for edge detection and to minimize the loss that occurs during downsampling. The inverse discrete wavelet transform (IDWT) is used to obtain the highly resolved image, so that a high-quality, high-resolution output is generated from the low-resolution input. A noisy input will generate an output with a low PSNR value, so a noise-tolerant resolution enhancement technique using adaptive sub-band thresholding is employed. Downsampling in each of the DWT sub-bands causes information loss in the respective sub-bands; SWT is employed to minimize this loss. The IDWT converts the sub-bands, downsampled using the DWT, into a highly resolved image. The combined image denoising and resolution enhancement techniques generate an image with a high PSNR value. The proposed method improves image resolution and reaches the optimized threshold.
Keywords: Image processing, inverse discrete wavelet transform, PSNR.
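A stripped-down sketch of the DWT+SWT upscaling idea is given below using PyWavelets and SciPy: the decimated DWT detail sub-bands of the low-resolution image are upscaled, corrected with the undecimated SWT detail sub-bands, and the low-resolution image itself is used as the approximation band for the inverse DWT. The adaptive sub-band thresholding and denoising stages of the paper are omitted, and the test image is synthetic.

```python
import numpy as np
import pywt
from scipy import ndimage

def dwt_swt_upscale(lr, wavelet="db1"):
    """2x resolution enhancement in the spirit of DWT/SWT-based methods
    (thresholding step omitted)."""
    lr = np.asarray(lr, dtype=float)
    # Decimated DWT: detail sub-bands are half the size of the input.
    _, (lh, hl, hh) = pywt.dwt2(lr, wavelet)
    # Undecimated SWT: sub-bands keep the input size (no downsampling loss).
    _, (slh, shl, shh) = pywt.swt2(lr, wavelet, level=1)[0]
    # Upscale the decimated detail bands to the input size and correct them.
    up = lambda band: ndimage.zoom(band, 2.0, order=1)[: lr.shape[0], : lr.shape[1]]
    lh_c, hl_c, hh_c = up(lh) + slh, up(hl) + shl, up(hh) + shh
    # Inverse DWT with the LR image as the approximation band -> 2x output.
    return pywt.idwt2((lr, (lh_c, hl_c, hh_c)), wavelet)

def psnr(ref, test, peak=255.0):
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic smooth "scene" and its simulated low-resolution version.
hr = np.clip(ndimage.gaussian_filter(np.random.rand(128, 128) * 255, 3), 0, 255)
lr = hr[::2, ::2]
sr = dwt_swt_upscale(lr)
print(sr.shape, f"PSNR vs. reference: {psnr(hr, sr):.2f} dB")
```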
338 Catchment Yield Prediction in an Ungauged Basin Using PyTOPKAPI
Authors: B. S. Fatoyinbo, D. Stretch, O. T. Amoo, D. Allopi
Abstract:
This study extends the use of the Drainage Area Regionalization (DAR) method to generating synthetic data and calibrating PyTOPKAPI stream yield for an ungauged basin at a daily time scale. The generation of runoff in determining a river yield depends on various topographic and spatial meteorological variables, which together form the Catchment Characteristics Model (CCM). Many of the conventional CCM models adapted in Africa have been challenged by a paucity of adequate, relevant, and accurate data to parameterize and validate them. The purpose of generating synthetic flow is to test a hydrological model in a way that does not suffer from the impact of very low or very high flows, thus allowing a check of whether the model is structurally sound. The employed physically based, watershed-scale hydrologic model (PyTOPKAPI) was parameterized with GIS pre-processing parameters and remote-sensing hydro-meteorological variables. Validation against the mean annual runoff ratio shows decent graphical agreement between the observed and the simulated discharge. The Nash-Sutcliffe efficiency and coefficient of determination (R²) values of 0.704 and 0.739 indicate strong model efficiency. Given the current impact of climate variability, water planners now have a tool for flow quantification and sustainable planning purposes.
Keywords: Ungauged Basin, Catchment Characteristics Model, Synthetic data, GIS.
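For reference, the two goodness-of-fit statistics quoted above are typically computed as in the short sketch below; the observed and simulated discharge series here are placeholders, not the study's data.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - sum((O - S)^2) / sum((O - mean(O))^2)."""
    o, s = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

def r_squared(observed, simulated):
    """Coefficient of determination as the squared Pearson correlation."""
    o, s = np.asarray(observed, float), np.asarray(simulated, float)
    return float(np.corrcoef(o, s)[0, 1] ** 2)

# Placeholder daily discharge series (m^3/s), not the study's data.
obs = np.array([12.0, 15.0, 30.0, 55.0, 42.0, 25.0, 18.0, 14.0])
sim = np.array([11.0, 17.0, 27.0, 50.0, 45.0, 27.0, 16.0, 15.0])
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}, R^2 = {r_squared(obs, sim):.3f}")
```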
337 Numerical Simulation of Heat Exchanger Area of R410A-R23 and R404A-R508B Cascade Refrigeration System at Various Evaporating and Condensing Temperature
Authors: A. D. Parekh, P. R. Tailor
Abstract:
The capacity and efficiency of any refrigerating system diminish rapidly as the difference between the evaporating and condensing temperatures is increased by a reduction in the evaporator temperature. The single-stage vapour compression refrigeration system is limited to an evaporator temperature of -40 °C. Below a temperature of -40 °C, either a cascade refrigeration system or a multi-stage vapour compression system is employed. The present work describes the thermal design of the three main heat exchangers, namely the condenser (HTS), cascade condenser, and evaporator (LTS), of the R404A-R508B and R410A-R23 cascade refrigeration systems. The heat transfer areas of the condenser (HTS), cascade condenser, and evaporator (LTS) for both systems have been compared, and the effect of condensing and evaporating temperatures on heat transfer area has been studied under the same operating conditions. The results show that the required heat transfer areas of the condenser and cascade condenser for the R410A-R23 cascade system are lower than those of the R404A-R508B cascade system, but the heat transfer area of the evaporator is similar for both systems. The heat transfer areas of the condenser and cascade condenser decrease with an increase in condensing temperature (Tc), whereas the heat transfer areas of the cascade condenser and evaporator increase with an increase in evaporating temperature (Te).
Keywords: Heat-transfer area, R410A, R404A, R508B, R23, Refrigeration system, Thermal design
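Thermal sizing of each exchanger in such a design typically comes down to the relation A = Q / (U · ΔT_lm). The sketch below applies it to a counter-flow cascade condenser; the duty, overall heat-transfer coefficient, and terminal temperature differences are hypothetical values, not the paper's design data.

```python
import math

def lmtd(dt_hot_end, dt_cold_end):
    """Log-mean temperature difference from the two terminal temperature differences."""
    if abs(dt_hot_end - dt_cold_end) < 1e-9:
        return dt_hot_end
    return (dt_hot_end - dt_cold_end) / math.log(dt_hot_end / dt_cold_end)

def required_area(duty_kw, u_w_per_m2k, dt_hot_end, dt_cold_end):
    """Heat-transfer area from Q = U * A * LMTD (counter-flow, no correction factor)."""
    return duty_kw * 1000.0 / (u_w_per_m2k * lmtd(dt_hot_end, dt_cold_end))

# Hypothetical cascade-condenser case: R23 condensing against evaporating R410A.
duty = 10.0          # kW, heat rejected in the cascade condenser (assumed)
u = 800.0            # W/m^2.K, assumed overall heat-transfer coefficient
dt1, dt2 = 8.0, 4.0  # K, assumed terminal temperature differences
print(f"Required cascade-condenser area: {required_area(duty, u, dt1, dt2):.3f} m^2")
```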
336 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method
Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari
Abstract:
The present study is concerned with the optimal design of functionally graded plates using the particle swarm optimization (PSO) algorithm. In this study, the meshless local Petrov-Galerkin (MLPG) method is employed to obtain the natural frequencies of the functionally graded (FG) plate. The effects of two parameters, the thickness-to-height ratio and the volume fraction index, on the natural frequencies and total mass of the plate are studied using the MLPG results. Then the first natural frequency of the plate, for different conditions where MLPG data are not available, is predicted by an artificial neural network (ANN) approach trained by the back-error propagation (BEP) technique. The ANN results show that the predicted data are in good agreement with the actual ones. To maximize the first natural frequency and minimize the mass of the FG plate simultaneously, the weighted-sum optimization approach and the PSO algorithm are used. The proposed optimization process can provide the designers of FG plates with useful data.
Keywords: Optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization.
335 Optimization of Proton Exchange Membrane Fuel Cell Parameters Based on Modified Particle Swarm Algorithms
Authors: M. Dezvarei, S. Morovati
Abstract:
In recent years, the increasing usage of electrical energy has provided a wide field for investigating new methods to produce clean electricity with high reliability and cost management. Fuel cells are a new, clean generation technology that produces electricity and thermal energy together, with high performance and no environmental pollution. With the expansion of fuel cell usage in different industrial networks, the identification and optimization of their parameters is really significant. This paper presents the optimization of proton exchange membrane fuel cell (PEMFC) parameters based on modified particle swarm optimization with real-valued mutation (RVM) and clonal algorithms. The mathematical equations of this type of fuel cell are presented as the main model structure in the optimization process. The optimized parameters based on the clonal and RVM algorithms are compared with the desired values in the presence and absence of measurement noise. This paper shows that these methods can improve the performance of traditional optimization methods. Simulation results are employed to analyze and compare the performance of these methodologies in optimizing the proton exchange membrane fuel cell parameters.
Keywords: Clonal algorithm, proton exchange membrane fuel cell, particle swarm optimization, real-valued mutation.