Search results for: Hebbian learning rule.
882 Performance Evaluation of a Prioritized, Limited Multi-Server Processor-Sharing System That Includes Servers with Various Capacities
Authors: Yoshiaki Shikata, Nobutane Hanayama
Abstract:
We present a prioritized, limited multi-server processor-sharing (PS) system in which each server has a different capacity and N (≥2) priority classes are allowed in each PS server. In each prioritized, limited server, a different service ratio is assigned to each request class, and the number of requests processed concurrently is limited to a certain maximum. Routing strategies for such prioritized, limited multi-server PS systems that take the capacity of each server into account are also presented, and a performance evaluation procedure for these strategies is discussed. Practical performance measures of these strategies, such as loss probability, mean waiting time, and mean sojourn time, are evaluated via simulation. In the PS server, at the arrival (or departure) of a request, the extension (or shortening) of the remaining sojourn time of each request receiving service can be calculated from the number of requests of each class and the priority ratio. Using a simulation program that executes these events and calculations, the performance of the proposed prioritized, limited multi-server PS rule can be analyzed. From the evaluation results, the most suitable routing strategy for the loss or waiting system is identified.
Keywords: Processor sharing, multi-server, various capacity, N priority classes, routing strategy, loss probability, mean sojourn time, mean waiting time, simulation.
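As a rough illustration of the rate calculation described in the abstract above, the sketch below computes the per-request service rates in a discriminatory processor-sharing server: whenever a request arrives or departs, the rates change, which lengthens or shortens the remaining sojourn time of every request in service. The class ratios, capacity, and remaining work used here are illustrative assumptions, not values from the paper.

```python
# Sketch: per-request service rates in a discriminatory processor-sharing (PS) server.
# At every arrival or departure the rates change, lengthening or shortening the
# remaining sojourn time of every request currently in service.

def service_rates(capacity, in_service, ratio):
    """capacity: total server capacity (work units per second)
    in_service: list of (request_id, class_k) currently receiving service
    ratio: priority ratio g_k per class, e.g. {1: 3.0, 2: 1.0} (assumed values)"""
    total_weight = sum(ratio[k] for _, k in in_service)
    return {rid: capacity * ratio[k] / total_weight for rid, k in in_service}

# Illustrative state: two class-1 requests and one class-2 request on a server of capacity 1.0.
state = [("a", 1), ("b", 1), ("c", 2)]
rates = service_rates(1.0, state, {1: 3.0, 2: 1.0})
print(rates)  # class-1 requests each receive 3/7 of the capacity, the class-2 request 1/7

# If each request still had 2.0 work units left and the rates stayed constant,
# the remaining sojourn times would be:
print({rid: 2.0 / r for rid, r in rates.items()})
```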
881 A Comparative Analysis of Machine Learning Techniques for PM10 Forecasting in Vilnius
Authors: M. A. S. Fahim, J. Sužiedelytė Visockienė
Abstract:
Air pollution (AP) has gained more prominence than ever before, and public awareness of how poor air quality indices (AQI) damage human health continues to grow, creating a duty to disseminate that knowledge. The study focuses on assessing air pollution prediction models specifically for Lithuania, addressing a substantial need for empirical research within the region. Concentrating on Vilnius, it examines particulate matter concentrations of 10 micrometers or less in diameter (PM10). Using Gaussian process regression (GPR), regression tree ensemble, and regression tree methodologies, predictive forecasting models are validated and tested on hourly data from January 2020 to December 2022. The study explores the classification of AP data into anthropogenic and natural sources, the impact of AP on human health, and its connection to cardiovascular diseases. The study revealed varying levels of accuracy among the models, with GPR achieving the highest accuracy, indicated by an RMSE of 4.14 in validation and 3.89 in testing.
Keywords: Air pollution, anthropogenic and natural sources, machine learning, Gaussian process regression, tree ensemble, forecasting models, particulate matter.
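As a rough illustration of the modelling step, the sketch below fits a Gaussian process regressor to lagged hourly values and reports the RMSE on a chronological hold-out; the synthetic series, lag features, and kernel choice are assumptions for the example and not the study's actual configuration.

```python
# Sketch: Gaussian process regression on lagged hourly PM10 values, with an RMSE check.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
hours = np.arange(600)
pm10 = 20 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)  # synthetic stand-in series

lags = 3                                   # previous 3 hourly values as predictors
X = np.column_stack([pm10[i:pm10.size - lags + i] for i in range(lags)])
y = pm10[lags:]

split = int(0.8 * y.size)                  # chronological train/test split
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:split], y[:split])

rmse = np.sqrt(mean_squared_error(y[split:], gpr.predict(X[split:])))
print(f"test RMSE: {rmse:.2f}")            # the study reports 3.89 in testing on its real data
```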
880 Solving Process Planning, Weighted Earliest Due Date Scheduling and Weighted Due Date Assignment Using Simulated Annealing and Evolutionary Strategies
Authors: Halil Ibrahim Demir, Abdullah Hulusi Kokcam, Fuat Simsir, Özer Uygun
Abstract:
Traditionally, three important manufacturing functions, namely process planning, scheduling, and due-date assignment, are performed sequentially and separately. Although there are numerous works on the integration of process planning and scheduling, and plenty of works focusing on scheduling with due-date assignment, there are only a few works on integrated process planning, scheduling, and due-date assignment. Whereas due dates in the literature are usually determined without taking the weights of the customers into account, here weighted due-date assignment is employed to obtain better performance. Jobs are scheduled according to the weighted earliest due date dispatching rule, and due dates are determined by several popular due-date assignment methods that take the weight of each job into account. Simulated annealing, evolutionary strategies, random search, a hybrid of random search and simulated annealing, and a hybrid of random search and evolutionary strategies are applied as solution techniques. The three manufacturing functions are integrated step by step, and higher integration levels are found to perform better. The search metaheuristics prove very useful in improving the performance measure.
Keywords: Evolutionary strategies, hybrid searches, process planning, simulated annealing, weighted due-date assignment, weighted scheduling.
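The simulated annealing component can be sketched as follows: a job ordering is perturbed, and worse neighbours are accepted with a temperature-dependent probability. The job data, the TWK-style due-date rule, the weighted-tardiness cost, and the cooling schedule are illustrative assumptions rather than the paper's exact integrated formulation.

```python
# Sketch: simulated annealing over job orderings, minimising weighted tardiness against
# due dates assigned as a multiple of processing time (a simple TWK-style rule).
import math
import random

random.seed(1)
jobs = [{"p": random.randint(1, 9), "w": random.randint(1, 5)} for _ in range(12)]  # time, weight

def cost(order, k=2.0):
    t, total = 0, 0.0
    for j in order:
        due = k * jobs[j]["p"]                    # assumed due-date assignment rule
        t += jobs[j]["p"]                         # completion time of job j
        total += jobs[j]["w"] * max(0, t - due)   # weighted tardiness
    return total

current = list(range(len(jobs)))
best = current[:]
temp = 50.0
while temp > 0.1:
    cand = current[:]
    i, j = random.sample(range(len(cand)), 2)
    cand[i], cand[j] = cand[j], cand[i]           # neighbourhood move: swap two jobs
    delta = cost(cand) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        current = cand                            # accept better (or occasionally worse) solutions
        if cost(current) < cost(best):
            best = current[:]
    temp *= 0.995                                 # geometric cooling
print(cost(best), best)
```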
879 The Application of Real Options to Capital Budgeting
Authors: George Yungchih Wang
Abstract:
Real options theory suggests that managerial flexibility embedded within irreversible investments can account for a significant value in project valuation. Although the argument has become the dominant focus of capital investment theory over decades, recent survey literature in capital budgeting indicates that corporate practitioners still do not explicitly apply real options in investment decisions. In this paper, we explore how real options decision criteria can be transformed into equivalent capital budgeting criteria under uncertainty, assuming that the underlying stochastic process follows a geometric Brownian motion (GBM), a mixed diffusion-jump (MX), or a mean-reverting process (MR). These equivalent valuation techniques can be readily decomposed into conventional investment rules and "option impacts", the latter of which describe the impacts on optimal investment rules once the option value is considered. Based on numerical analysis and Monte Carlo simulation, three major findings are derived. First, it is shown that real options can be successfully integrated into the mindset of conventional capital budgeting. Second, the inclusion of option impacts tends to delay investment; the delay effect is the most significant under a GBM process and the least significant under an MR process. Third, it is optimal to adopt the new capital budgeting criteria in investment decision-making, and adopting a suboptimal investment rule without considering real options could lead to a substantial loss in value.
Keywords: Real options, capital budgeting, geometric Brownian motion, mixed diffusion-jump, mean-reverting process.
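To make the GBM case concrete, the sketch below simulates the project value one period ahead under geometric Brownian motion and compares the conventional invest-now NPV with the value of deferring the decision by one period, a simplified stand-in for the "option impact"; the drift, volatility, and cost figures are illustrative assumptions, not the paper's calibration.

```python
# Sketch: Monte Carlo under geometric Brownian motion (GBM), comparing the
# conventional "invest now" NPV with the value of a one-period deferral option.
import numpy as np

rng = np.random.default_rng(42)
V0, I, sigma, r, dt, n = 100.0, 95.0, 0.30, 0.05, 1.0, 100_000  # assumed project value, cost, vol, rate

# GBM step (risk-neutral drift): V_dt = V0 * exp((r - 0.5*sigma^2)*dt + sigma*sqrt(dt)*Z)
Z = rng.standard_normal(n)
V_dt = V0 * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z)

npv_now = V0 - I                                          # conventional rule: invest immediately
defer = np.exp(-r * dt) * np.maximum(V_dt - I, 0).mean()  # wait one period, invest only if worthwhile
print(f"invest now: {npv_now:.2f}  defer one period: {defer:.2f}")
# When the deferral value exceeds the immediate NPV, including the option impact delays investment.
```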
878 Blind Image Deconvolution by Neural Recursive Function Approximation
Authors: Jiann-Ming Wu, Hsiao-Chang Chen, Chun-Chang Wu, Pei-Hsun Hsu
Abstract:
This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct a recursive function embedded within the blurred image, to extract the non-deterministic component of the original source image, and to use it to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, reconstruction of the original source image from the blurred image is then carried out by an annealed Hopfield neural network. Numerical simulations show that the proposed method is effective for faithful estimation of an unknown blurring matrix and restoration of an original source image.
Keywords: Blind image deconvolution, linear shift-invariant (LSI), linear image degradation model, radial basis functions (RBF), recursive function, annealed Hopfield neural networks.
877 Digital Learning and Entrepreneurship Education: Changing Paradigms
Authors: Shivangi Agrawal, Hsiu-I Ting
Abstract:
Entrepreneurship is an essential source of economic growth and a prominent factor influencing socio-economic development. Entrepreneurship education fosters and enhances entrepreneurial activity. This study aims to understand current trends in entrepreneurship education and evaluate the effectiveness of diverse entrepreneurship education programs. An increasing number of universities offer entrepreneurship education courses to create and successfully sustain entrepreneurial ventures. Despite the prevalence of entrepreneurship education, research findings remain inconsistent about its effectiveness in promoting and developing entrepreneurship. Strategies to develop entrepreneurial attitudes and intentions among individuals are hindered by a lack of understanding of the purposes, components, methodology, and resources required for entrepreneurship education. A lack of adequate entrepreneurship education has been linked with low self-efficacy and weak entrepreneurial intent. Moreover, in the age of digitisation and during the COVID-19 pandemic, digital learning platforms (e.g. online entrepreneurship education courses and programs) and other digital tools (e.g. digital game-based entrepreneurship education) have become more relevant to entrepreneurship education. This paper contributes to the academic literature on entrepreneurship education by evaluating and assessing current trends in entrepreneurship education programs, leading to a better understanding that can reduce the gap between entrepreneurial development requirements and what higher education institutions provide.
Keywords: entrepreneurship education, digital technologies, academic entrepreneurship, COVID-19
876 Towards a Deeper Understanding of 21st Century Global Terrorism
Authors: Francis Jegede
Abstract:
This paper examines essential issues relating to the rise and nature of violent extremism involving non-state actors and groups in the early 21st century. Global trends in terrorism and violent extremism are examined in relation to Western governments’ counter-terror operations. The paper analyses the existing legal framework for fighting violent extremism and terrorism and highlights the inherent limitations of the current International Law of War in dealing with the growing challenges posed by terrorists and violent extremist groups. The paper discusses how terrorist groups use civilians, women, and children as tools and weapons of war to fuel their campaign of terror, and suggests ways in which the international community could fight terrorist groups without putting civilians, women, and children in harm's way. The paper emphasises the need to uphold human rights values and respect for the law of war in our response to global terrorism. It poses the question of whether the current legal framework for dealing with terrorist groups is sufficient without contravening the essential provisions and ethos of the International Law of War and Human Rights. While the paper explains how terrorist groups flagrantly disregard the rule of law and disrespect human rights in their campaign of terror, it also notes instances in which the current Western strategy in fighting terrorism may be viewed as conflicting with human rights and international law.
Keywords: Terrorism, law of war, international law, violent extremism.
875 Students’ Perception of Vector Representation in the Context of Electric Force and the Role of Simulation in Developing an Understanding
Authors: S. Shubha, B. N. Meera
Abstract:
Physics Education Research (PER) results have shown that students do not achieve the expected level of competency in understanding concepts across different domains of physics when taught by traditional teaching methods, the concepts of Electricity and Magnetism (E&M) being one among them. Simulation, as a valuable instructional tool, offers an opportunity to visualize varied experiences with such concepts. Considering the electric force concept, which requires extensive use of vector representations, we report here research results pertaining to student understanding of this concept and the role of simulation in using vector representation. The simulation platform provides a positive impact on the use of vector representation. The first stage of this study involves eliciting and analyzing student responses to questions that probe their understanding of the concept of electrostatic force; this is followed by four stages of student interviews as they use interactive simulations of electric force in one dimension. Student responses to the questions are recorded in real time using an electronic pad. A validation test interview is conducted to evaluate students' understanding of the electric force concept after using the interactive simulation. Results indicate a lack of procedural knowledge of vector representation. The study emphasizes the need for an appropriate choice of simulation and mode of induction for learning.
Keywords: Electric Force, Interactive, Representation, Simulation.
874 A Bayesian Kernel for the Prediction of Protein-Protein Interactions
Authors: Hany Alashwal, Safaai Deris, Razib M. Othman
Abstract:
Understanding protein functions is a major goal in the post-genomic era. Proteins usually work in the context of other proteins and rarely function alone; it is therefore highly relevant to study the interaction partners of a protein in order to understand its function. Machine learning techniques have been widely applied to predict protein-protein interactions. Kernel functions play an important role in the success of a machine learning technique, and choosing the appropriate kernel function can lead to better accuracy in a binary classifier such as the support vector machine. In this paper, we describe a Bayesian kernel for the support vector machine to predict protein-protein interactions. The Bayesian kernel can improve classifier performance by incorporating the probability characteristics of the available experimental protein-protein interaction data, which were compiled from different sources. In addition, the probabilistic output from the Bayesian kernel can assist biologists in conducting further research on the most highly predicted interactions. The results show that the accuracy of the classifier is improved with the Bayesian kernel compared to the standard SVM kernels.
Keywords: Bioinformatics, protein-protein interactions, Bayesian kernel, support vector machines.
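The idea of plugging a probability-aware kernel into an SVM can be sketched with scikit-learn's support for callable kernels. The toy feature matrix, the per-source reliability weights, and the specific kernel form below are assumptions made for illustration; they are not the authors' Bayesian kernel.

```python
# Sketch: an SVM with a custom kernel that reweights features by an assumed
# reliability score for each source of interaction evidence (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((300, 6))                                  # toy features from six evidence sources
y = (X @ np.array([2.0, 1.5, 1.0, 0.5, 0.2, 0.1])
     + 0.3 * rng.standard_normal(300) > 2.6).astype(int)  # toy interaction labels

reliability = np.array([0.9, 0.8, 0.7, 0.5, 0.3, 0.2])    # assumed per-source confidence

def weighted_rbf(A, B, gamma=1.0):
    """RBF kernel on reliability-weighted features: k(a, b) = exp(-gamma * ||w*(a-b)||^2)."""
    Aw, Bw = A * reliability, B * reliability
    d2 = ((Aw[:, None, :] - Bw[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)
clf = SVC(kernel=weighted_rbf, probability=True).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
print("probabilistic output for the first test pair:", clf.predict_proba(Xte[:1]))
```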
873 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine
Authors: Hira Lal Gope, Hidekazu Fukai
Abstract:
The aim of this study is to develop a system that can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. A peaberry is not a defective bean, but neither is it a normal bean: it forms as a single, relatively round seed inside a coffee cherry instead of the usual flat-sided pair of beans, and it has a distinct value and flavor. To improve the taste of the coffee, it is necessary to separate peaberries from normal beans before roasting the green coffee beans; otherwise the flavors mix and the overall taste suffers. During roasting, the beans' shape, size, and weight should be uniform, since larger beans take longer to roast through. A peaberry has a different size and shape even though it has the same weight as a normal bean, and it roasts more slowly than normal beans, so screening by size or weight does not provide a good option for selecting peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult even for trained specialists because the shape and color of the peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate normal and peaberry beans as part of the sorting system. As the first step, we applied deep convolutional neural networks (CNN) and support vector machines (SVM) as machine learning techniques to discriminate peaberries from normal beans. As a result, better performance was obtained with the CNN than with the SVM for the discrimination of peaberries. The artificial neural network trained on a high-performance CPU and GPU in this work will then be installed on an inexpensive, computationally limited Raspberry Pi system, which we assume will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
Keywords: Convolutional neural networks, coffee bean, peaberry, sorting, support vector machine.
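A small CNN of the kind compared against the SVM might look like the tf.keras sketch below; the input size, layer widths, and randomly generated stand-in images are assumptions, since the abstract does not give the architecture or dataset. A model of this size can later be converted with TensorFlow Lite for deployment on a Raspberry Pi-class device.

```python
# Sketch: a small CNN for two-class (normal vs. peaberry) green-bean image classification.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.random((200, 64, 64, 3)).astype("float32")   # stand-in 64x64 RGB bean crops
y = rng.integers(0, 2, 200)                          # 0 = normal, 1 = peaberry (placeholder labels)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # peaberry probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))               # [loss, accuracy] on the stand-in data
```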
872 Creative Teaching of New Product Development to Operations Managers
Authors: Marco Leite, J. M. Vilas-Boas da Silva, Isabel Duarte de Almeida
Abstract:
New Product Development (NPD) has its roots in an engineering background. Thus, one might wonder about the interest, opportunity, contents, and delivery process if students from the soft sciences were involved. This paper addressed «What to teach?» and «How to do it?» as the preliminary research questions that originated the introduced propositions. The curriculum-developer model, purposefully chosen to adapt the coursebook by pursuing macro/micro strategies, was found significant by an exploratory qualitative case study. Moreover, learning was developed and value created by implementing the institutional curriculum through a creative, hands-on, experiential, problem-solving, problem-based but organized teamwork approach. The product design of an orange squeezer complying with ill-defined requirements, including drafts, sketches, prototypes, CAD simulations, and a business plan, plus a website, written reports, and presentations, were the deliverables that confirmed an innovative contribution towards research and practice in teaching and learning engineering subjects with non-specialist operations manager candidates.
Keywords: Teaching Engineering to Non-specialists, Operations Managers Education, Teamwork, Product Design and Development, Market-driven NPD, Curriculum development.
871 A Formative Assessment Tool for Effective Feedback
Authors: Rami Rashkovits, Ilana Lavy
Abstract:
In this study we present a formative assessment tool we developed for students' assignments. The tool enables lecturers to define assignments for the course and to assign each problem in each assignment a list of criteria and weights by which the students' work is evaluated. During assessment, the lecturers enter the score for each criterion together with a justification. Once the scores of the current assignment are completely entered, the tool automatically generates reports for both students and lecturers. The students receive a report by email that includes a detailed description of their assessed work, their relative score, and their progress across the criteria along the course timeline. This information is presented via charts generated automatically by the tool from the entered scores. The lecturers receive a report that includes summative (e.g., averages, standard deviations) and detailed (e.g., histogram) data for the current assignment. This information enables the lecturers to follow the class achievements and adjust the learning process accordingly. The tool was examined on two pilot groups of college students studying courses in (1) Object-Oriented Programming and (2) Plane Geometry. Results reveal that most of the students were satisfied with the assessment process and the reports produced by the tool. The lecturers who used the tool were also satisfied with the reports and their contribution to the learning process.
Keywords: Computer-based formative assessment tool, science education.
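The core calculation, criterion scores combined by lecturer-defined weights into a relative score plus class summary statistics, can be sketched as follows; the criteria names, weights, and scores are hypothetical, not taken from the tool.

```python
# Sketch: weighted criterion scoring and a class summary for one assignment.
from statistics import mean, stdev

criteria = {"correctness": 0.5, "design": 0.3, "documentation": 0.2}  # hypothetical weights (sum to 1)

def relative_score(scores):
    """scores: criterion -> points on a 0-100 scale."""
    return sum(criteria[c] * scores[c] for c in criteria)

class_scores = {
    "student_a": {"correctness": 90, "design": 70, "documentation": 60},
    "student_b": {"correctness": 75, "design": 85, "documentation": 80},
    "student_c": {"correctness": 60, "design": 65, "documentation": 90},
}
totals = {s: relative_score(v) for s, v in class_scores.items()}
print(totals)                                          # per-student report data
print(mean(totals.values()), stdev(totals.values()))   # summative data for the lecturer's report
```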
870 Association of Sensory Processing and Cognitive Deficits in Children with Autism Spectrum Disorders – Pioneer Study in Saudi Arabia
Authors: Rana M. Zeina, Laila AL-Ayadhi, Shahid Bashir
Abstract:
The association between sensory problems and cognitive abilities has been studied in individuals with Autism Spectrum Disorders (ASDs). In this study, we used a neuropsychological test battery to evaluate memory and attention in children with ASD and sensory problems compared to children with ASD without sensory problems. Four tests of the Cambridge Neuropsychological Test Automated Battery (CANTAB), namely Big/Little Circle (BLC), Simple Reaction Time (SRT), Intra/Extra-dimensional Set Shift (IED), and Spatial Recognition Memory (SRM), were administered to 14 children with ASD and sensory problems and 13 children with ASD without sensory problems, aged 3 to 12, with IQs above 70. The children with ASD and sensory problems performed worse than the group without sensory problems on the comprehension, learning, reversal, and simple reaction time tasks, and no significant difference between the two groups was recorded in the visual memory and visual comprehension tasks. The findings of this study suggest that children with ASD and sensory problems face deficits in learning, comprehension, reversal, and speed of response to a stimulus.
Keywords: Visual memory, Attention, Autism Spectrum Disorders (ASDs).
869 Alignment between Understanding and Assessment Practice among Secondary School Teachers
Authors: Eftah Bte. Moh @ Hj Abdullah, Izazol Binti Idris, Abd Aziz Bin Abd Shukor
Abstract:
This study aimed to identify the alignment of understanding and assessment practices among secondary school teachers. It was carried out as a quantitative descriptive study. The sample consisted of 164 teachers who taught Form 1 and 2 in 11 secondary schools in the district of North Kinta, Perak, Malaysia. Data were obtained from the 164 respondents who answered the Expectation Alignment Understanding and Practices of School Assessment (PEKDAPS) questionnaire and were analysed using SPSS 17.0+. The Cronbach's alpha value obtained in the PEKDAPS pilot study was 0.86. The results showed that teachers' performance on the PEKDAPS, based on the mean values, was less than 3, which means that perfect alignment between the understanding and practices of school assessment does not occur. Two major PEKDAPS sub-constructs, articulation across grade and age and usability of the system, were higher than moderate alignment of the understanding and practices of school assessment (mean = 2.0), whereas the content-focused sub-construct showed lower than moderate alignment (mean = 2.0). Another two PEKDAPS sub-constructs, transparency and fairness and the pedagogical implications, showed moderate alignment (2.0). The implication of the study is that teachers need to fully understand the importance of alignment among the components of assessment, learning and teaching, and learning objectives as a strategy to achieve a quality assessment process.
Keywords: Alignment, assessment practices, School Based Assessment, understanding.
868 Detecting Email Forgery using Random Forests and Naïve Bayes Classifiers
Authors: Emad E. Abdallah, A. F. Otoom, Arwa Saqer, Ola Abu-Aisheh, Diana Omari, Ghadeer Salem
Abstract:
As email communications have no consistent authentication procedure to ensure authenticity, we present an investigative analysis approach for detecting forged emails based on Random Forests and Naïve Bayes classifiers. Instead of investigating the email headers, we use the body content to extract a unique writing style for all the possible suspects. Our approach consists of four main steps: (1) the cybercrime investigator extracts different effective features, including structural, lexical, linguistic, and syntactic evidence, from previous emails of all the possible suspects; (2) the extracted feature vectors are normalized to increase the accuracy rate; (3) the normalized features are used to train the learning engine; (4) upon receiving the anonymous email (M), we apply the feature extraction process to produce a feature vector. Finally, using the machine learning classifiers, the email is assigned to the suspect whose writing style most closely matches M. Experimental results on real data sets show the improved performance of the proposed method and its ability to identify the authors with a very limited number of features.
Keywords: Digital investigation, cybercrimes, email forensics, anonymous emails, writing style, authorship analysis.
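The four-step pipeline lends itself to a compact sketch: stylometric feature vectors are normalized, two classifiers are trained on the suspects' previous emails, and the anonymous message M is assigned to the closest-matching author. The synthetic feature values below are placeholders, not the paper's extracted evidence.

```python
# Sketch: authorship attribution of an anonymous email with Random Forest and Naive Bayes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_suspects, per_suspect, n_features = 4, 50, 20           # structural/lexical/syntactic features
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(per_suspect, n_features)) for i in range(n_suspects)])
y = np.repeat(np.arange(n_suspects), per_suspect)         # step 1: features per known suspect

scaler = StandardScaler().fit(X)                          # step 2: normalize feature vectors
Xn = scaler.transform(X)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xn, y)  # step 3: train classifiers
nb = GaussianNB().fit(Xn, y)

M = scaler.transform(rng.normal(loc=2, scale=1.0, size=(1, n_features)))  # step 4: anonymous email M
print("Random Forest suspect:", rf.predict(M)[0], rf.predict_proba(M).round(2))
print("Naive Bayes suspect:  ", nb.predict(M)[0], nb.predict_proba(M).round(2))
```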
867 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection
Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay
Abstract:
With the increase in credit card usage, the volume of credit card misuse has also increased significantly, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified, and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on some static characteristics, such as what the user knows (knowledge-based methods) or what he/she has access to (object-based methods). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while interacting with his/her electronic devices. These approaches rely on how people behave (instead of what they do), which cannot easily be forged. By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable, and scalable models of credit card fraud detection.
Keywords: credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey
866 Lean Manufacturing: Systematic Layout Planning Application to an Assembly Line Layout of a Welding Industry
Authors: Fernando Augusto Ullmann Tobe, Moacyr Amaral Domingues Figueiredo, Stephany Rie Yamamoto Gushiken
Abstract:
The purpose of this paper is to present the process of elaborating the layout of an assembly line in a welding industry using the principles of lean manufacturing as the main driver. The objective is relevant because the current layout of the assembly line causes non-productive time for operators, related to the lean waste of unnecessary movement. The methodology used for the project development was Project-Based Learning (PBL), an active way of learning focused on real problems. The process of selecting the methodology for layout planning considered three criteria to evaluate the most relevant one for this paper's goal. As a result of this evaluation, Systematic Layout Planning was selected, and three steps were added to it: value stream mapping of the current situation and of the changed layout, and the definition of lean tools and layout type. This inclusion was made to incorporate lean manufacturing into the layout redesign. The layout change resulted in an increase in the value-adding time of operations carried out in the sector, a reduction in movement times between the previous and final assemblies, and cost savings in the man-hour value of the employees, which can be invested in productive hours instead of movement time.
Keywords: Assembly line, layout, lean manufacturing, systematic layout planning.
865 Learning Mandarin Chinese as a Foreign Language in a Bilingual Context: Adult Learners’ Perceptions of the Use of L1 Maltese and L2 English in Mandarin Chinese Lessons in Malta
Authors: Christiana Gauci-Sciberras
Abstract:
The first language (L1) can be used in foreign language teaching and learning as a pedagogical tool to scaffold new knowledge in the target language (TL) upon linguistic knowledge that the learner already has. In a bilingual context, code-switching between the two languages usually occurs in classrooms; one of the reasons for code-switching is that both languages are used to scaffold new knowledge. This research paper aims to find out why both the L1 (Maltese) and the L2 (English) are used in the classroom of Mandarin Chinese as a foreign language (CFL) in the bilingual context of Malta. It also aims to find out the learners' perceptions of the use of a bilingual medium of instruction. Two research methods were used to collect qualitative data: semi-structured interviews with adult learners of Mandarin Chinese, and lesson observations. These two methods were used so that the data collected in the interviews could be triangulated with the data collected in the lesson observations. The L1 (Maltese) is the language of instruction used most, and the teacher and the learners switch to the L2 (English), or to any other foreign language, according to the need at a particular moment during the lesson.
Keywords: Chinese, bilingual, pedagogical purpose of L1 and L2, CFL acquisition.
864 Multi-Enterprise Tie and Co-Operation Mechanism in Mexican Agro Industry SME's
Authors: Tania Elena González Alvarado, Ma. Antonieta Martín Granados
Abstract:
The aim of this paper is to explain what a multi-enterprise tie is, what evidence its analysis provides, and how the cooperation mechanism influences the establishment of a multi-enterprise tie. The study focuses on businesses of smaller dimension, geographically dispersed, whose businessmen are learning to cooperate in an international environment. The empirical evidence obtained so far permits the following conclusions: the tie is not long-lasting, it has an end; opportunism is an opportunity to learn; the multi-enterprise tie is a space in which to learn about the cooperation mechanism; the local tie permits a businessman to alternate between competition and cooperation strategies; the disappearance of a tie is a learning experience for a businessman, diminishing the possibility of failure in the next tie; the cooperation mechanism tends to eliminate hierarchical relations; the multi-enterprise tie diminishes the asymmetries and permits SMEs to be in a better position when they negotiate with large companies; and the multi-enterprise tie has a positive impact on the local system. The collection of empirical evidence was done through the following instruments: direct observation at a business encounter attended by the businesses in 2003 (202 Mexican agro-industry SMEs), a survey applied in 2004 (129), a questionnaire applied in 2005 (86 businesses), field visits to the businesses during the period 2006-2008, and a survey applied by telephone in 2008 (55 Mexican agro-industry SMEs).
Keywords: Cooperation, multi-enterprise tie, links, networks.
863 System Identification with General Dynamic Neural Networks and Network Pruning
Authors: Christian Endisch, Christoph Hackl, Dierk Schröder
Abstract:
This paper presents an exact pruning algorithm with an adaptive pruning interval for general dynamic neural networks (GDNN). GDNNs are artificial neural networks with internal dynamics: all layers have feedback connections with time delays to the same and to all other layers. The structure of the plant is unknown, so the identification process is started with a larger network architecture than necessary. During parameter optimization with the Levenberg-Marquardt (LM) algorithm, irrelevant weights of the dynamic neural network are deleted in order to find as simple a model for the plant as possible. The weights to be pruned are found by direct evaluation of the training data within a sliding time window. The influence of pruning on the identification system depends on the network architecture at pruning time and on the selected weight to be deleted. As the architecture of the model changes drastically during the identification and pruning process, it is suggested to adapt the pruning interval online. Two system identification examples show the architecture selection ability of the proposed pruning approach.
Keywords: System identification, dynamic neural network, recurrent neural network, GDNN, optimization, Levenberg-Marquardt, real-time recurrent learning, network pruning, quasi-online learning.
862 Performing Diagnosis in Building with Partially Valid Heterogeneous Tests
Authors: Houda Najeh, Mahendra Pratap Singh, Stéphane Ploix, Antoine Caucheteux, Karim Chabir, Mohamed Naceur Abdelkrim
Abstract:
Building systems are highly vulnerable to different kinds of faults and human misbehaviors, and energy efficiency and user comfort are directly affected by abnormalities in building operation. The available fault diagnosis tools and methodologies rely mainly on rules or pure model-based approaches, where it is assumed that a model- or rule-based test can be applied to any situation without taking the actual testing context into account. Contextual tests with validity domains could greatly reduce the effort of designing detection tests. The main objective of this paper is to take test validity into account when validating the test model, considering non-modeled events such as occupancy, weather conditions, and door and window openings, and integrating the expert's knowledge of the state of the system. The concept of heterogeneous tests is combined with test validity to generate fault diagnoses. A combination of rule-based, range-based, and model-based tests, known as heterogeneous tests, is proposed to reduce the modeling complexity. The calculation of logical diagnoses by artificial-intelligence techniques provides a global explanation consistent with the test results. An application example, an office setting at the Grenoble Institute of Technology, shows the efficiency of the proposed technique.
Keywords: Heterogeneous tests, validity, building system, sensor grids, sensor fault, diagnosis, fault detection and isolation.
861 Effects of Gamification on Lower Secondary School Students’ Motivation and Engagement
Authors: Goh Yung Hong, Mona Masood
Abstract:
This paper explores the effects of gamification on lower secondary school students’ motivation and engagement in the classroom. A two-group posttest-only experimental design was employed to study the influence of a gamification teaching method (GTM) compared with a conventional teaching method (CTM) on 60 lower secondary school students. The Student Engagement Instrument (SEI) and the Intrinsic Motivation Inventory (IMI) were used to assess students' intrinsic motivation and engagement level towards the respective teaching method. Findings indicate that students who completed the GTM lesson were significantly higher in intrinsic motivation to learn than those from the CTM. Although the result was not significant and the difference in engagement means was only marginal, the GTM still shows better potential for raising students' engagement in class compared with the CTM. This finding suggests that the GTM is likely to address the current issue of low motivation to learn and low engagement in class among lower secondary school students in Malaysia. On the other hand, despite not being significant, a higher mean indicates that the CTM contributes positively to peer support for learning and to a better teacher-student relationship compared with the GTM. In conclusion, the gamification approach is flexible and can be adapted to many kinds of learning content to enhance intrinsic motivation to learn and, to some extent, encourage better student engagement in class.
Keywords: Conventional teaching method, Gamification teaching method, Motivation, Engagement.
860 Ensemble Learning with Decision Tree for Remote Sensing Classification
Authors: Mahesh Pal
Abstract:
In recent years, a number of works proposing the combination of multiple classifiers to produce a single classification have been reported in the remote sensing literature. The resulting classifier, referred to as an ensemble classifier, is generally found to be more accurate than any of the individual classifiers making up the ensemble. As accuracy is the primary concern, much of the research in the field of land cover classification focuses on improving classification accuracy. This study compares the performance of four ensemble approaches (boosting, bagging, DECORATE, and random subspace) with a univariate decision tree as the base classifier. Two training datasets, one without any noise and the other with 20 percent noise, were used to judge the performance of the different ensemble approaches. Results with the noise-free dataset suggest an improvement of about 4% in classification accuracy with all ensemble approaches in comparison with the univariate decision tree classifier. The highest classification accuracy, 87.43%, was achieved by the boosted decision tree. A comparison of results with the noisy dataset suggests that the bagging, DECORATE, and random subspace approaches work well with these data, whereas the performance of the boosted decision tree degrades to a classification accuracy of 79.7%, which is even lower than the 80.02% achieved by the unboosted decision tree classifier.
Keywords: Ensemble learning, decision tree, remote sensing classification.
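Three of the four ensemble approaches have direct scikit-learn counterparts, sketched below with a decision tree as the base classifier (DECORATE has no standard scikit-learn implementation and is omitted); the synthetic, label-noised data stands in for the land cover training sets.

```python
# Sketch: bagging, boosting and random-subspace ensembles over a decision-tree base classifier.
# (On scikit-learn < 1.2 the keyword is base_estimator= instead of estimator=.)
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=12, n_informative=8, n_classes=4,
                           n_clusters_per_class=1, flip_y=0.2, random_state=0)  # ~20% label noise
base = DecisionTreeClassifier(max_depth=5, random_state=0)  # depth-limited so boosting stays meaningful

models = {
    "single tree": base,
    "bagging": BaggingClassifier(estimator=base, n_estimators=50, random_state=0),
    "boosting": AdaBoostClassifier(estimator=base, n_estimators=50, random_state=0),
    "random subspace": BaggingClassifier(estimator=base, n_estimators=50, max_features=0.5,
                                         bootstrap=False, random_state=0),
}
for name, model in models.items():
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))
```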
859 Financing Decision and Productivity Growth for the Venture Capital Industry Using High-Order Fuzzy Time Series
Authors: Shang-En Yu
Abstract:
Human society faces many uncertainties, such as forecasting economic growth rates during a financial crisis. Since Song and Chissom introduced the concept of fuzzy time series in 1993, many scholars have proposed different models to deal with such problems. Previous studies, however, usually neither consider the selection of relevant variables nor base the fuzzification on anything other than subjective opinions when discretizing the fuzzy semantics, so they cannot objectively reflect the characteristics of the data set; in addition, forecasts often treat all fuzzy rules as equally important and fail to consider the importance of each fuzzy rule. For these reasons, this study performs variable selection (factor selection) through a self-organizing map (SOM) and proposes a high-order weighted multivariate fuzzy time series model based on a fuzzy back-propagation neural network (Fuzzy-BPN), with predictions weighted by the ordered weighted averaging (OWA) operator. To verify the proposed method, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) of the Taiwan Stock Exchange Corporation is used as the experimental forecasting target, and the appropriate variables are filtered in the experiment. Finally, in comparison with models from other recent studies, the results show that the predictive ability of the proposed approach is further improved.
Keywords: Heterogeneity, residential mortgage loans, foreclosure.
858 An Anomaly Detection Approach to Detect Unexpected Faults in Recordings from Test Drives
Authors: Andreas Theissler, Ian Dear
Abstract:
In the automotive industry, test drives are conducted during the development of new vehicle models or as part of the quality assurance of series-production vehicles. The communication on the in-vehicle network, data from external sensors, and internal data from the electronic control units are recorded by automotive data loggers during the test drives, and the recordings are used for fault analysis. Since the resulting data volume is tremendous, manually analysing each recording in great detail is not feasible. This paper proposes using machine learning to support domain experts by preventing them from contemplating irrelevant data and instead pointing them to the relevant parts of the recordings. The underlying idea is to learn the normal behaviour from available recordings, i.e. a training set, and then to autonomously detect unexpected deviations and report them as anomalies. The one-class support vector machine "support vector data description" is utilised to calculate distances between feature vectors. SVDDSUBSEQ is proposed as a novel approach that allows subsequences in multivariate time series data to be classified. The approach allows unexpected faults to be detected without modelling effort, as shown by experimental results on recordings from test drives.
Keywords: Anomaly detection, fault detection, test drive analysis, machine learning.
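The subsequence idea can be sketched with scikit-learn's one-class SVM standing in for the SVDD formulation, to which it is closely related; the window length, the two-channel signal, and the injected fault are assumptions made for the example.

```python
# Sketch: a one-class SVM over sliding-window subsequences of a multivariate signal,
# trained on normal recordings only and then used to flag anomalous windows.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def windows(signal, width=20):
    """Flatten overlapping subsequences of a (time, channels) array into feature vectors."""
    return np.array([signal[i:i + width].ravel() for i in range(len(signal) - width)])

rng = np.random.default_rng(0)
t = np.arange(3000)
normal = np.column_stack([np.sin(t / 10), np.cos(t / 15)]) + 0.05 * rng.standard_normal((3000, 2))

test = normal.copy()
test[1500:1520, 0] += 2.0                      # injected unexpected fault in channel 0

scaler = StandardScaler().fit(windows(normal))
model = OneClassSVM(nu=0.01, gamma="scale").fit(scaler.transform(windows(normal)))

labels = model.predict(scaler.transform(windows(test)))   # -1 = anomaly, +1 = normal
print("anomalous window indices:", np.where(labels == -1)[0][:10], "...")
```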
857 The Ethio-Eritrea Claims Commission on Use of Force: Issue of Self-Defense or Violation of Sovereignty
Authors: Isaias Teklia Berhe
Abstract:
A decision that deals with international disputes, be it arbitral or judicial, has to properly reflect objectivity and coherence with the existing rules of international law. This paper shows that the decision of the Ethio-Eritrea Claims Commission in the jus ad bellum case is bereft of objectivity and coherence, and has done a disservice to international law in many respects. The Commission's decision holding Eritrea in contravention of Art 2(4) of the UN Charter, based on Ethiopia's contention, is flawed. It fails to consider the illegitimacy of an actual authority established over contested territory through hostile acts, the proper determination of effectivités under international law, the sanctity of colonially determined boundaries, Ethiopia's prior firm political recognition of, and undertakings to respect, the colonial boundary, and the Ethio-Eritrea Boundary Commission's decision. The paper also argues that the Commission confused Eritrea's right of self-defense with the rule against the use of force to settle territorial disputes, and that its decision therefore sanitizes unlawful changes of territory brought about through the unlawful use of force, to the effect of advantaging aggression. The paper likewise argues that the decision is so flawed that it disregards the ossified legal finality of colonial boundaries. Moreover, its approach toward armed attack does not reflect the peculiarity of the jus ad bellum case; rather, it introduces definitional uncertainties and sustains the perception that the law on self-defense is unsettled.
Keywords: Armed attack, self-defense, territorial integrity, use of force.
856 Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model
Authors: Anupama Pande, Ashok Kumar Thakur, Swapnoneel Roy
Abstract:
A complex-valued neural network is a neural network with complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems, and one of their most important applications is in signal processing. In neural networks, the generalized mean neuron (GMN) model is often discussed and studied. The GMN includes an aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper presents exhaustive results of using the generalized mean neuron model in a complex-valued neural network that uses the back-propagation algorithm (called "Complex-BP") for learning. Our experimental results demonstrate the effectiveness of a generalized mean neuron model in the complex plane for signal processing over a real-valued neural network. We study and report various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for the convergence of the error in a generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
Keywords: Complex-valued neural network, generalized mean neuron model, signal processing.
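One common form of generalized-mean aggregation, applied here to complex-valued inputs with a split activation, is sketched below; the exact aggregation, activation, and weight handling used by the authors may differ.

```python
# Sketch: a single generalized-mean neuron on complex-valued inputs. The aggregation used
# here is a weighted generalized mean, (sum_i w_i * x_i**r)**(1/r), followed by a split
# (real/imaginary) tanh activation; the authors' exact formulation may differ.
import numpy as np

def generalized_mean_neuron(x, w, r=2.0):
    """x, w: complex-valued inputs and weights; r: order of the generalized mean."""
    agg = np.sum(w * x**r) ** (1.0 / r)
    return np.tanh(agg.real) + 1j * np.tanh(agg.imag)   # split activation, a common complex-BP choice

x = np.array([0.5 + 0.2j, -0.3 + 0.8j, 0.1 - 0.4j])     # illustrative complex inputs
w = np.array([0.7 + 0.1j, 0.2 - 0.3j, 0.5 + 0.5j])      # illustrative complex weights
for r in (1.0, 2.0, 3.0):
    print(r, generalized_mean_neuron(x, w, r))
```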
855 Enhanced GA-Fuzzy OPF under both Normal and Contingent Operation States
Authors: Ashish Saini, A.K. Saxena
Abstract:
Genetic algorithm (GA) based solution techniques are found suitable for optimization because of their ability to perform a simultaneous multidimensional search. Many GA variants have been tried in the past to solve optimal power flow (OPF), one of the nonlinear problems of electric power systems. Issues such as convergence speed, the accuracy of the optimal solution obtained after a number of generations, and the handling of system constraints in OPF are subjects of discussion. The results obtained with GA-Fuzzy OPF on various power systems have shown faster convergence and lower generation costs than other approaches. This paper presents an enhanced GA-Fuzzy OPF (EGA-OPF) that uses penalty factors to handle line-flow constraints and load-bus voltage limits, for both the normal network and a contingency case with congestion. In addition to a crossover and mutation rate adaptation scheme, which adapts crossover and mutation probabilities for each generation based on the fitness values of previous generations, a block swap operator is incorporated in the proposed EGA-OPF. The line-flow limits and load-bus voltage magnitude limits are handled by incorporating line-overflow and load-voltage penalty factors, respectively, in each chromosome's fitness function. The effects of different penalty factor settings are also analyzed under the contingent state.
Keywords: Contingent operation state, fuzzy rule base, genetic algorithms, optimal power flow.
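The penalty-factor handling of constraints can be sketched as a fitness function evaluated per chromosome; the cost figures, limits, and penalty coefficients below are illustrative assumptions, not the paper's settings.

```python
# Sketch: penalty-augmented fitness for one OPF chromosome, handling line-flow and
# load-bus-voltage limits (all numbers and limits are illustrative).
def fitness(gen_cost, line_flows, flow_limits, load_voltages, v_min=0.95, v_max=1.05,
            k_flow=1e4, k_volt=1e5):
    """Lower is better: generation cost plus quadratic penalties for violated limits."""
    flow_pen = sum(max(0.0, abs(f) - lim) ** 2 for f, lim in zip(line_flows, flow_limits))
    volt_pen = sum(max(0.0, v_min - v) ** 2 + max(0.0, v - v_max) ** 2 for v in load_voltages)
    return gen_cost + k_flow * flow_pen + k_volt * volt_pen

# Feasible operating point vs. a cheaper but congested (post-contingency) one:
print(fitness(8200.0, [0.8, 0.6], [1.0, 1.0], [0.98, 1.01]))
print(fitness(8150.0, [1.2, 0.6], [1.0, 1.0], [0.93, 1.01]))  # cheaper but penalised
```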
854 Learning to Recognize Faces by Local Feature Design and Selection
Authors: Yanwei Pang, Lei Zhang, Zhengkai Liu
Abstract:
Studies in neuroscience suggest that both global and local feature information are crucial for the perception and recognition of faces. It is widely believed that local features are less sensitive to variations caused by illumination and expression. In this paper, we target the design and learning of local features for face recognition. We design three types of local features: the semi-global feature, the local patch feature, and the tangent shape feature. The design of the semi-global feature aims at taking advantage of global-like information while avoiding suppressing the AdaBoost algorithm when boosting weak classifiers established from small local patches. The design of the local patch feature targets the automatic selection of discriminative features, and thus differs from traditional approaches, in which local patches are usually selected manually to cover the salient facial components. A shape feature is also considered for frontal-view face recognition. These features are selected and combined within the framework of a boosting algorithm and a cascade structure. The experimental results demonstrate that the proposed approach outperforms the standard eigenface method and a Bayesian method. Moreover, the selected local features and the observations made in the experiments are enlightening for research on local feature design in face recognition.
Keywords: Face recognition, local feature, AdaBoost, subspace analysis.
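Feature selection by boosting can be sketched as follows: each candidate local patch yields a simple feature, and AdaBoost implicitly selects the discriminative patches by weighting weak classifiers built on them. The patch grid, the mean-intensity feature, and the synthetic images are illustrative assumptions, not the authors' semi-global, patch, or shape features.

```python
# Sketch: AdaBoost over weak classifiers built on local patch features (mean intensity of
# each cell in a grid), letting boosting pick out the discriminative patches.
# (On scikit-learn < 1.2 the keyword is base_estimator= instead of estimator=.)
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
faces = rng.random((400, 32, 32))              # stand-in face images for two classes
labels = rng.integers(0, 2, 400)
faces[labels == 1, 8:16, 8:16] += 0.3          # class-specific local structure in one region

def patch_features(imgs, grid=8):
    """Mean intensity of each (grid x grid) patch -> one local feature per patch."""
    n, h, w = imgs.shape
    ph, pw = h // grid, w // grid
    return imgs.reshape(n, grid, ph, grid, pw).mean(axis=(2, 4)).reshape(n, -1)

X = patch_features(faces)
Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=1)
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),  # decision stumps
                         n_estimators=100, random_state=0).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
print("most informative patches:", np.argsort(clf.feature_importances_)[-5:])
```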
853 Instant Location Detection of Objects Moving at High-Speed in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
A practical and efficient approach is suggested for estimating the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. For super-dynamic objects (trains, cars) it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag: reliable estimation of the coordinates of a monitored object requires some time for the C-OTDR system to collect observations, and the final decision can be issued only once the required sample volume has been collected. This is contrary to the requirements of many real applications; for example, in rail traffic management systems we need localization data for dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of the C-OTDR signals to obtain the most reliable solution in real time. Parameters of this type can be called «signaling parameters» (SPs). Several SPs carry information about the instantaneous localization of dynamic objects for each of the C-OTDR channels. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are not stable, while, as a rule, an SP that is very stable becomes insensitive. This report describes a method for co-processing the SPs, designed to obtain the most effective localization estimates for dynamic objects within the C-OTDR monitoring system framework.
Keywords: C-OTDR-system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems.