Search results for: knowledge complexity
1323 Modeling and Analysis for Effective Capacity of Cross-Layer Optimized Wireless Networks
Authors: Reham A. El-mayet, Hesham M. El-Badawy, Salwa H. Elramly
Abstract:
New generation mobile communication networks are able to support triple play services. To this end, Orthogonal Frequency Division Multiplexing (OFDM) access techniques have been chosen to enlarge system capability for high data rate networks. Many cross-layer modeling and optimization schemes for the Quality of Service (QoS) and capacity of downlink multiuser OFDM systems have been proposed. In this paper, Maximum Weighted Capacity (MWC) based resource allocation is used at the Physical (PHY) layer. This resource allocation scheme provides much better QoS than previous schemes, while maintaining the highest or nearly the highest capacity at similar complexity. In addition, Delay Satisfaction (DS) scheduling, which allows more than one connection to be served in each slot, is used at the Medium Access Control (MAC) layer. This scheduling technique is more efficient than conventional scheduling for investigating both the number of users and the number of subcarriers against system capacity. The system is optimized for different operational environments: both outdoor and indoor deployment scenarios are investigated, as are different channel models. In addition, the effective capacity approach [1] is used not only to provide QoS for different mobile users, but also to increase the total wireless network throughput.
Keywords: Cross-layer, effective capacity, LTE, OFDM, QoS, resource allocation, wireless networks.
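For orientation, the effective capacity referenced above is commonly defined (following Wu and Negi's link-layer model, on which reference [1] builds; the paper's exact variant may differ) as the maximum constant arrival rate a service process S(t) can sustain under a QoS exponent θ:

```latex
\alpha(\theta) = -\lim_{t \to \infty} \frac{1}{\theta t}\,
\ln \mathbb{E}\!\left[ e^{-\theta S(t)} \right],
\qquad
\Pr\{D > D_{\max}\} \approx e^{-\theta\, \alpha(\theta)\, D_{\max}},
```

where D is the queueing delay. A larger θ encodes a stricter delay-violation requirement, which is how this approach ties per-user QoS provisioning to achievable throughput.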
1322 Combating Money Laundering in the Banking Industry: Malaysian Experience
Authors: Aspalella A. Rahman
Abstract:
Money laundering has been described by many as the lifeblood of crime and is a major threat to the economic and social well-being of societies. It has been recognized that the banking system has long been the central element of money laundering, partly because of the complexity and confidentiality of the banking system itself. It is generally accepted that effective anti-money laundering (AML) measures adopted by banks will make it tougher for criminals to get their "dirty money" into the financial system. In fact, for law enforcement agencies, banks are considered an important source of valuable information for the detection of money laundering. From the banks' perspective, however, the main reason for their existence is to make as much profit as possible; hence their cultural and commercial interests are totally distinct from those of the law enforcement authorities. Undoubtedly, AML laws create a major dilemma for banks, as they produce a significant shift in the way banks interact with their customers. Furthermore, the implementation of the laws not only creates significant compliance problems for banks, but also has the potential to adversely affect their operations. As such, it is legitimate to ask whether these laws are effective in preventing money launderers from using banks, or whether they simply put an unreasonable burden on banks and their customers. This paper attempts to address these issues and analyze them against the background of the Malaysian AML laws. Effective coordination between the AML regulator and the banking industry is vital to minimize the problems faced by banks and thereby to ensure effective implementation of the laws in combating money laundering.
Keywords: Banking Industry, Bank Negara, Money Laundering, Malaysia.
1321 Can Exams Be Shortened? Using a New Empirical Approach to Test in Finance Courses
Authors: Eric S. Lee, Connie Bygrave, Jordan Mahar, Naina Garg, Suzanne Cottreau
Abstract:
Marking exams is universally detested by lecturers. Final exams in many higher education courses often last 3.0 hrs. Do exams really need to be so long? Can we justifiably reduce the number of questions on them? Surprisingly few have researched these questions, arguably because of the complexity and difficulty of using traditional methods. To answer these questions empirically, we used a new approach based on three key elements: Use of an unusual variation of a true experimental design, equivalence hypothesis testing, and an expanded set of six psychometric criteria to be met by any shortened exam if it is to replace a current 3.0-hr exam (reliability, validity, justifiability, number of exam questions, correspondence, and equivalence). We compared student performance on each official 3.0-hr exam with that on five shortened exams having proportionately fewer questions (2.5, 2.0, 1.5, 1.0, and 0.5 hours) in a series of four experiments conducted in two classes in each of two finance courses (224 students in total). We found strong evidence that, in these courses, shortening of final exams to 2.0 hrs was warranted on all six psychometric criteria. Shortening these exams by one hour should result in a substantial one-third reduction in lecturer time and effort spent marking, lower student stress, and more time for students to prepare for other exams. Our approach provides a relatively simple, easy-to-use methodology that lecturers can use to examine the effect of shortening their own exams.
Keywords: Exam length, psychometric criteria, synthetic experimental designs, test length.
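The equivalence-testing element of the approach can be illustrated with a small sketch: a minimal two one-sided tests (TOST) procedure in Python, assuming a hypothetical equivalence margin of ±5 marks and a simple pooled degrees-of-freedom approximation (the paper's six psychometric criteria go well beyond this):

```python
import numpy as np
from scipy import stats

def tost_equivalent(scores_full, scores_short, margin=5.0, alpha=0.05):
    """Two one-sided tests (TOST): are mean scores on the shortened exam
    within +/- margin marks of the full 3.0-hr exam?"""
    scores_full, scores_short = np.asarray(scores_full), np.asarray(scores_short)
    diff = scores_short.mean() - scores_full.mean()
    se = np.sqrt(scores_short.var(ddof=1) / len(scores_short)
                 + scores_full.var(ddof=1) / len(scores_full))
    df = len(scores_full) + len(scores_short) - 2          # simple df approximation
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)    # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)        # H0: diff >= +margin
    return max(p_lower, p_upper) < alpha                   # equivalent if both rejected

# Synthetic example with two groups of 112 students
rng = np.random.default_rng(0)
print(tost_equivalent(rng.normal(65, 12, 112), rng.normal(66, 12, 112)))
```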
1320 Broadband PowerLine Communications: Performance Analysis
Authors: Justinian Anatory, Nelson Theethayi, M. M. Kissaka, N. H. Mvungi
Abstract:
The power line channel has been proposed as an alternative for broadband data transmission, especially in developing countries such as Tanzania [1]. However, the channel suffers from stochastic attenuation and deep notches, which can limit channel capacity and achievable data rate. Various studies have characterized the channel without establishing its maximum performance and data-rate limits exactly, possibly owing to the complexity of the channel models used. In this paper, the channel performance of medium voltage, low voltage, and indoor power line channels is presented. The investigations consider orthogonal frequency division multiplexing (OFDM) with phase shift keying (PSK) as the carrier modulation scheme for indoor, medium voltage, and low voltage channels with a typical ten branches; Golay coding is additionally applied to the medium voltage channel. Deep notches are observed in the channels' frequency responses at various frequencies, which can reduce the achievable data rate. Nevertheless, data rates of up to 240 Mbps are realized at a signal-to-noise ratio of about 50 dB for the indoor and low voltage channels, whereas a typical medium voltage link with ten branches is affected by strong multipath, and coding is required for feasible broadband data transfer.
Keywords: Powerline Communications, branched network, channel model, modulation, channel performance, OFDM.
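To make concrete how deep notches cap the aggregate rate, a textbook per-subcarrier Shannon estimate (not the paper's simulation) can be computed as follows; the subcarrier counts and SNR values are hypothetical:

```python
import numpy as np

def ofdm_rate_bps(snr_db_per_subcarrier, subcarrier_bw_hz):
    """Aggregate achievable rate of an OFDM link: each subcarrier k
    contributes B * log2(1 + SNR_k); notched subcarriers contribute ~0."""
    snr = 10.0 ** (np.asarray(snr_db_per_subcarrier) / 10.0)
    return np.sum(subcarrier_bw_hz * np.log2(1.0 + snr))

# Hypothetical channel: 1024 subcarriers of 25 kHz at 50 dB SNR,
# with deep notches (-10 dB) hitting 10% of the subcarriers.
rng = np.random.default_rng(1)
snr_db = np.full(1024, 50.0)
snr_db[rng.choice(1024, size=102, replace=False)] = -10.0
print(f"{ofdm_rate_bps(snr_db, 25e3) / 1e6:.1f} Mbps")
```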
1319 Disparity of Learning Styles and Cognitive Abilities in Vocational Education
Authors: Mimi Mohaffyza Mohamad, Yee Mei Heong, Nurfirdawati Muhammad Hanafi, Tee Tze Kiong
Abstract:
This study investigates the disparity between learning styles and cognitive abilities, specifically in vocational education. The Felder and Silverman Learning Styles Model (FSLSM) was applied to measure the students' learning styles, while the content of the Building Construction subject, consisting of knowledge, skills, and problem solving, was taken into account in constructing the elements of cognitive abilities. Building Construction is one of the vocational courses offered in the Vocational Education structure. Felder and Silverman propose four dimensions of learning styles intended to capture student learning preferences: processing (active or reflective), perception (sensing or intuitive), input of information (visual or verbal), and understanding (sequential or global). The Felder-Solomon Learning Styles Index, developed on the basis of FSLSM, was used to identify each student's learning preferences; the index consists of 44 items characterizing the learning style dimensions in FSLSM. An achievement test was developed to determine the students' cognitive abilities. The quantitative data were analyzed using descriptive and inferential statistics involving Multivariate Analysis of Variance (MANOVA). The study discovered that students tend to be visual learners, that the learner types differ significantly, and that the cognitive-ability findings differ across learner types in knowledge, skills, and problem solving. The study characterizes the gap between learner type and cognitive abilities in several illustrations and explains how the connection is made. The findings may help teachers facilitate students more effectively and boost students' cognitive abilities.
Keywords: Learning Styles, Cognitive Abilities, Dimension of Learning Styles, Learning Preferences.
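As an illustration of the inferential step, a MANOVA relating the three cognitive-ability measures to learner type can be run with Python's statsmodels; the data frame below is synthetic and the column names are hypothetical:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "learner_type": rng.choice(["visual", "verbal", "sensing", "intuitive"], size=n),
    "knowledge": rng.normal(60, 10, n),
    "skills": rng.normal(55, 12, n),
    "problem_solving": rng.normal(50, 15, n),
})

# Joint test (Wilks' lambda, Pillai's trace, ...) of the effect of learner
# type on the three cognitive-ability scores taken together.
res = MANOVA.from_formula("knowledge + skills + problem_solving ~ learner_type", data=df)
print(res.mv_test())
```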
1318 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm
Authors: Xiang Jianhong, Wang Cong, Wang Linyu
Abstract:
With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A compressed sensing (CS)-based WBAN system can avoid sampling a large amount of redundant information and reduce the complexity and computing time of data processing, but existing algorithms have poor signal compression and reconstruction performance. In this paper, a joint block multi-orthogonal least squares (JBMOLS) algorithm is proposed. We apply the FECG signal to a joint block sparse model (JBSM), and a comparative study of sparse transformations and measurement matrices is carried out. An FECG signal compression and transmission mode based on the Rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction in this transmission mode is increased by nearly 10%, and runtime is reduced by about 30%.
Keywords: Telemedicine, fetal electrocardiogram, compressed sensing, joint sparse reconstruction, block sparse signal.
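JBMOLS itself is the paper's contribution; as a simplified, single-block stand-in from the same greedy-recovery family, plain orthogonal matching pursuit (OMP) in Python shows the reconstruction loop that block variants extend:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x.
    A simplified, single-block stand-in for the joint block (JBMOLS) idea."""
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))                # most correlated atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)   # re-fit on support
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Toy check: recover a 10-sparse signal of length 256 from 103 measurements
rng = np.random.default_rng(0)
A = rng.normal(size=(103, 256)) / np.sqrt(103)    # Gaussian measurement matrix
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = rng.normal(size=10)
print(np.linalg.norm(omp(A, A @ x_true, 10) - x_true))  # should be ~0
```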
1317 Solving Part Type Selection and Loading Problem in Flexible Manufacturing System Using Real Coded Genetic Algorithms – Part II: Optimization
Authors: Wayan F. Mahmudy, Romeo M. Marian, Lee H. S. Luong
Abstract:
This paper presents the modeling and optimization of two NP-hard problems in flexible manufacturing systems (FMS): the part type selection problem and the loading problem. Due to the complexity and extent of the problems, the paper was split into two parts. The first part discussed the modeling of the problems and showed how real coded genetic algorithms (RCGA) can be applied to solve them. This second part discusses the effectiveness of the RCGA, which uses an array of real numbers as its chromosome representation. The novel chromosome representation produces only feasible solutions, which minimizes the computational time the GA would otherwise need to push its population toward the feasible search space or to repair infeasible chromosomes. The proposed RCGA improves FMS performance by considering two objectives: maximizing system throughput and maintaining the balance of the system (minimizing system unbalance). The resulting objective values are compared to the optimum values produced by a branch-and-bound method. The experiments show that the proposed RCGA reaches near-optimum solutions in a reasonable amount of time.
Keywords: Flexible manufacturing system, production planning, part type selection problem, loading problem, real-coded genetic algorithm
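The abstract does not spell out the decoding, but the classic way a real-coded chromosome guarantees feasibility for sequencing problems is random-key decoding (Bean's scheme), sketched here in Python as one plausible reading:

```python
import numpy as np

def decode_random_keys(chromosome):
    """Map a real-valued chromosome to a permutation: sorting the genes gives
    an ordering (e.g., of part types), so every chromosome decodes to a valid
    sequence and no repair step is ever needed."""
    return np.argsort(chromosome)

# Example: 6 part types. Crossover and mutation on the reals can never
# produce a duplicated or missing part in the decoded sequence.
print(decode_random_keys(np.array([0.62, 0.11, 0.93, 0.40, 0.05, 0.77])))
# -> [4 1 3 0 5 2]
```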
1316 Barriers and Conflicts in Relationships of Small Firms – Insights from Central Europe
Authors: Maciej Mitręga
Abstract:
This paper contributes to our knowledge about buyer-seller relations by identifying barriers and conflict situations associated with maintaining and developing durable business relationships by small companies. The contribution of prior studies with regard to negative aspects of marketing relationships is presented in the first section. The international research results are then discussed with regard to existing conceptualizations, and the main research implications are identified at the end.
Keywords: Relationship marketing, barriers, conflict, SME, international research.
1315 Text Mining Technique for Data Mining Application
Authors: M. Govindarajan
Abstract:
Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. The decision tree approach is most useful for classification problems: with this technique, a tree is constructed to model the classification process. There are two basic steps in the technique: building the tree and applying the tree to the database. This paper describes a proposed C5.0 classifier that applies rulesets, cross-validation, and boosting to the original C5.0 in order to reduce the error ratio. The feasibility and benefits of the proposed approach are demonstrated on a medical data set, hypothyroid. The performance of a classifier on the training cases from which it was constructed gives a poor estimate of its accuracy; by sampling or by using a separate test file, the classifier is instead evaluated on cases that were not used to build it. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to produce classifiers called rulesets; the ruleset achieves an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation: the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner. Trials over numerous data sets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
Keywords: C5.0, error ratio, text mining, training data, test data.
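C5.0/See5 is proprietary, but the tree / cross-validation / boosting workflow the abstract describes can be approximated in Python with scikit-learn; the synthetic data below stands in for the hypothyroid set:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 3772-case hypothyroid data (2772 train + 1000 test)
X, y = make_classification(n_samples=3772, n_features=29,
                           weights=[0.95], random_state=0)

tree = DecisionTreeClassifier(random_state=0)
cv_err = 1 - cross_val_score(tree, X, y, cv=10).mean()    # f-fold cross-validation
print(f"single tree, 10-fold CV error: {cv_err:.3%}")

# 'Boost option with x trials': here x = 10 boosted trees
boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3, random_state=0),
                             n_estimators=10, random_state=0)
boost_err = 1 - cross_val_score(boosted, X, y, cv=10).mean()
print(f"10-trial boosting, 10-fold CV error: {boost_err:.3%}")
```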
1314 Isolation and Classification of Red Blood Cells in Anemic Microscopic Images
Authors: Jameela Ali Alkrimi, Loay E. George, Azizah Suliman, Abdul Rahim Ahmad, Karim Al-Jashamy
Abstract:
Red blood cells (RBCs) are among the most commonly and intensively studied types of blood cells in cell biology. Anemia, a deficiency of RBCs, is characterized by a hemoglobin level below normal. In this study, an image processing based methodology was developed to localize and extract RBCs from microscopic images, and a machine learning approach was adopted to classify the localized anemic RBC images. Several textural and geometrical features are calculated for each extracted RBC. The training set of features was analyzed using principal component analysis (PCA), chosen for its low computational complexity and its suitability for finding the most discriminating features, which can lead to accurate classification decisions. With the proposed method, RBCs were isolated in 4.3 seconds from an image containing 18 to 27 cells. Our classifiers yielded accuracy rates of 100%, 99.99%, and 96.50% for the K-nearest neighbor (K-NN) algorithm, support vector machine (SVM), and RBF neural network (RBFNN), respectively. Classification was evaluated using sensitivity, specificity, and kappa statistical parameters. In conclusion, the classification results were obtained within a short time period, and the results improved when PCA was used.
Keywords: Red blood cells, pre-processing image algorithms, classification algorithms, principal component analysis (PCA), confusion matrix, kappa statistical parameters, ROC.
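The classification pipeline the abstract describes (features, then PCA, then a classifier) can be sketched with scikit-learn; the synthetic features below are a stand-in for the paper's textural and geometrical RBC measurements:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for textural/geometrical features of extracted RBCs
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Scale, reduce with PCA, then classify with K-NN (one of the paper's three classifiers)
clf = make_pipeline(StandardScaler(), PCA(n_components=8),
                    KNeighborsClassifier(n_neighbors=5))
clf.fit(X_tr, y_tr)
print(f"K-NN accuracy on PCA-reduced features: {clf.score(X_te, y_te):.2%}")
```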
1313 A Domain Specific Modeling Language Semantic Model for Artefact Orientation
Authors: Bunakiye R. Japheth, Ogude U. Cyril
Abstract:
Since the process of transforming user requirements into modeling constructs is not well supported by domain-specific frameworks, it became necessary to integrate domain requirements with specific architectures to achieve an integrated, customizable solution space via artifact orientation. Domain-specific modeling language specifications in model-driven engineering focus on requirements within a particular domain and can be tailored to help the domain expert express domain concepts effectively. Modeling processes built on domain-specific language formalisms are highly volatile due to dependencies on domain concepts or the process models used. A capable solution is given by artifact orientation, which stresses results rather than a strict dependence on complicated platforms for model creation and development. On this premise, domain-specific methods for producing artifacts, without having to take into account the complexity and variability of platforms for model definitions, can be integrated to support customizable development. In this paper, we discuss methods for integrating these capabilities and necessities within a common structure and semantics, contributing a metamodel for artifact orientation that leads to a reusable software layer with a concrete syntax capable of capturing design intents from the domain expert. The concepts forming the language formalism are established from models drawn from the oil and gas pipeline industry.
Keywords: Control process, metrics of engineering, structured abstraction, semantic model.
1312 A Microcontroller Implementation of Constrained Model Predictive Control
Authors: Amira Kheriji Abbes, Faouzi Bouani, Mekki Ksouri
Abstract:
Model Predictive Control (MPC) is an established control technique in a wide range of process industries. The reason for this success is its ability to handle multivariable systems and systems with input, output, or state constraints. Nevertheless, compared to the PID controller, implementation of MPC in miniaturized devices such as Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically been very small scale, owing to its implementation complexity and computation time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. In this work, we take advantage of recent advances in this area to deploy one of the most studied and applied control techniques in industrial engineering. We propose efficient firmware for the implementation of constrained MPC on an STM32 microcontroller using an interior point method. A performance study shows good execution speed and low computational burden. These results encourage the development of predictive control algorithms to be programmed for industrial standard processes. A PID anti-windup controller was also implemented on the STM32 in order to compare its performance with that of the MPC. The main features of the proposed constrained MPC framework are illustrated through two examples.
Keywords: Embedded software, microcontroller, constrained Model Predictive Control, interior point method, PID anti-windup, Keil tool, C/Cµ language.
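For reference, the optimization that the interior point method must solve at every sampling instant is typically a quadratic program of the following generic form (a textbook statement, not necessarily the paper's exact cost and constraint set):

```latex
\min_{\Delta u(k),\ldots,\Delta u(k+N_u-1)}
\sum_{j=1}^{N_p} \left\| \hat{y}(k+j) - w(k+j) \right\|_Q^2
+ \sum_{j=0}^{N_u-1} \left\| \Delta u(k+j) \right\|_R^2
\quad \text{s.t.} \quad
u_{\min} \le u \le u_{\max},\;
\Delta u_{\min} \le \Delta u \le \Delta u_{\max},\;
y_{\min} \le \hat{y} \le y_{\max},
```

with prediction horizon N_p, control horizon N_u, reference trajectory w, and weighting matrices Q and R. Solving a problem of this shape within one sampling period on a microcontroller is what makes the reported execution-speed results notable.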
1311 Questions Categorization in E-Learning Environment Using Data Mining Technique
Authors: Vilas P. Mahatme, K. K. Bhoyar
Abstract:
Nowadays, education cannot be imagined without digital technologies, which broaden the horizons of teaching and learning processes. Several universities offer online courses, and for evaluation purposes, e-examination systems are being widely adopted in academic environments; multiple-choice tests are extremely popular. In moving from traditional examinations to e-examination, Moodle is used as the Learning Management System (LMS), and Moodle logs every click that students make for attempting and navigational purposes in an e-examination. Data mining has been applied in various domains, including retail sales and bioinformatics. In recent years, there has been increasing interest in the use of data mining in e-learning environments, where it has been applied to discover, extract, and evaluate parameters related to students' learning performance; still, the combination of data mining and e-learning is in its infancy. Log data generated by students during an online examination can be used to discover knowledge with the help of data mining techniques. In web based applications, the number of right and wrong answers in the test result is not sufficient to assess and evaluate a student's performance, so assessment techniques must be intelligent: if a student cannot answer the question asked by the instructor, an easier question can be asked; otherwise, a more difficult question on a similar topic can be posed. To do so, it is necessary to identify the difficulty level of the questions, and the proposed work concentrates on this issue using data mining techniques, specifically clustering. The method decides the difficulty level of each question, categorizes questions as tough, easy, or moderate, and later serves them to students based on their performance. The proposed experiment categorizes the question set and also groups the students based on their examination performance, which will help the instructor guide students more specifically. In short, the mined knowledge helps to support, guide, facilitate, and enhance learning as a whole.
Keywords: Data mining, e-examination, e-learning, Moodle.
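A minimal sketch of the clustering step in Python with scikit-learn, using hypothetical per-question features of the kind that can be mined from Moodle logs (fraction answered correctly, time spent, number of attempts):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical log-derived features, one row per question:
# [fraction correct, mean seconds spent, mean number of attempts]
features = np.column_stack([rng.uniform(0.1, 0.95, 90),
                            rng.uniform(20, 300, 90),
                            rng.uniform(1, 4, 90)])

km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(StandardScaler().fit_transform(features))

# Name clusters by mean correctness: lowest -> 'tough', highest -> 'easy'
order = np.argsort([features[labels == c, 0].mean() for c in range(3)])
names = dict(zip(order, ["tough", "moderate", "easy"]))
print([names[l] for l in labels[:10]])
```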
1310 Motion Prediction and Motion Vector Cost Reduction during Fast Block Motion Estimation in MCTF
Authors: Karunakar A K, Manohara Pai M M
Abstract:
In the 3D-wavelet video coding framework, temporal filtering is done along the trajectory of motion using Motion Compensated Temporal Filtering (MCTF); a computationally efficient motion estimation technique is therefore a key need of MCTF. In this paper, a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a Group Of Pictures (GOP). The proposed technique applies the coarse and fine searches of any fast block based motion estimation only to the first pair of frames in a GOP. The generated motion vectors are supplied to the next consecutive frames, even at subsequent temporal levels, and only a fine search is carried out around those predicted motion vectors; the coarse search is thus skipped for all motion estimation in a GOP except the first pair of frames. The technique has been tested for different fast block based motion estimation algorithms over standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of all frame pairs in a GOP except the first will lie within ±1 of the motion vectors of the previous pair of frames, the number of bits required for motion vectors is also reduced by 50%.
Keywords: Motion Compensated Temporal Filtering, predictive motion estimation, lifted wavelet transform, motion vector.
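The inherited-vector refinement is easy to sketch. A generic SAD-based fine search in Python; the 16x16 block and ±1 radius follow the abstract, everything else is illustrative:

```python
import numpy as np

def refine_mv(cur, ref, x, y, mv_pred, block=16, radius=1):
    """Fine search only: evaluate SAD in a (2*radius+1)^2 window around the
    motion vector inherited from the previous frame pair, skipping the
    coarse search entirely."""
    tgt = cur[y:y + block, x:x + block].astype(np.int64)
    best_sad, best_mv = None, mv_pred
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ry, rx = y + mv_pred[1] + dy, x + mv_pred[0] + dx
            if 0 <= ry <= ref.shape[0] - block and 0 <= rx <= ref.shape[1] - block:
                sad = np.abs(tgt - ref[ry:ry + block, rx:rx + block]).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (mv_pred[0] + dx, mv_pred[1] + dy)
    return best_mv

# Usage with two grayscale frames as 2-D arrays:
# mv = refine_mv(cur_frame, ref_frame, x=32, y=48, mv_pred=(3, -2))
```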
1309 Assessment of Ultra-High Cycle Fatigue Behavior of EN-GJL-250 Cast Iron Using Ultrasonic Fatigue Testing Machine
Authors: Saeedeh Bakhtiari, Johannes Depessemier, Stijn Hertelé, Wim De Waele
Abstract:
High cycle fatigue, comprising up to 10⁷ load cycles, has been the subject of many studies, and the behavior of many materials in this regime has been recorded adequately. However, many applications involve larger numbers of load cycles during the lifetime of machine components. In this ultra-high cycle regime, other failure mechanisms come into play, and the concept of a fatigue endurance limit (assumed for materials such as steel) is often an oversimplification of reality. When machine component design demands high geometrical complexity, cast iron grades become interesting candidate materials. Grey cast iron is known for its low cost, high compressive strength, and good damping properties, but its ultra-high cycle fatigue behavior is poorly documented. The current work focuses on the ultra-high cycle fatigue behavior of EN-GJL-250 (GG25) grey cast iron by developing an ultrasonic (20 kHz) fatigue testing system. The testing machine is instrumented to measure the temperature and displacement of the specimen and to control the temperature. The high resonance frequency makes it possible to assess the behavior of the cast iron of interest for ultra-high numbers of cycles within a matter of days, and to repeat the tests in order to quantify the natural scatter in fatigue resistance.
Keywords: GG25, cast iron, ultra-high cycle fatigue, ultrasonic test.
1308 Intelligent Process and Model Applied for E-Learning Systems
Authors: Mafawez Alharbi, Mahdi Jemmali
Abstract:
E-learning is a developing area, especially in education, and can provide several benefits to learners. An intelligent system that collects all the components satisfying user preferences is therefore important. This research presents an approach capable of personalizing e-information and giving users what they need according to their preferences. The proposal can build up knowledge from successive evaluations made by the user, and it can also learn from the user's habits. Finally, we present a walk-through to demonstrate how the intelligent process works.
Keywords: Artificial intelligence, architecture, e-learning, software engineering, processing.
1307 Mastering the Innovation Paradox: The Five Unexpected Qualities of Innovation Leaders
Authors: Murtuza Ali Lakhani, Michelle Marquard
Abstract:
From an organizational perspective, leaders are a variation of the same talent pool in that they all score a larger than average value on the bell curve that maps leadership behaviors and characteristics, namely competence, vision, communication, confidence, cultural sensibility, stewardship, empowerment, authenticity, reinforcement, and creativity. The question that remains unanswered, and essentially unresolved, is how to explain the irony that leaders are so much alike yet their organizations diverge so noticeably in their ability to innovate. Leadership intersects with innovation at the point where human interactions get exceedingly complex and where certain paradoxical forces cohabit: conflict with conciliation, sovereignty with interdependence, and imagination with realism. Rather than accepting that leadership is without context, we argue that leaders are specialists of their domain and that those effective at leading for innovation are distinct within the broader pool of leaders. Keeping in view the extensive literature on leadership and innovation, we carried out a quantitative study with data collected over a five-year period involving 240 participants from across five dissimilar companies based in the United States. We found that while innovation and leadership are, in general, strongly interrelated (r = .89, p = 0.0), there are five qualities that set leaders apart on innovation. These qualities include a large radius of trust, a restless curiosity with a low need for acceptance, an honest sense of self and other, a sense for knowledge and creativity as the yin and yang of innovation, and an ability to use multiple senses in the engagement with followers. When these particular behaviors and characteristics are present in leaders, organizations out-innovate their rivals by a margin of 29.3 per cent, gaining an unassailable edge in a business environment that is regularly disruptive. A strategic outcome of this study is a psychometric scale named iLeadership, proposed with its underlying evidence, limitations, and potential for leadership and innovation in organizations.
Keywords: Innovation, leadership, ileadership, stewardship, communication, empowerment, creativity, vision, influence, emotional connection, group membership, sense of community, knowledge creation.
1306 Philosophy of Education: The Challenges of Globalization and Innovation in the Information Society
Authors: Shattyk Aliyev, Zhakypbek Altayev, Zuchra Ismagambetova, Yerkin Massanov
Abstract:
The information society is an entirely new social formation in which the infrastructure and social relations correspond to the socialized essence of mankind's «information genotype». The information society is a natural social environment that allows a person to fully reveal his or her informational nature, and to use intelligence to jointly create, with other people, new information on the basis of the knowledge accumulated by previous generations.
Keywords: Information society, Philosophy, Education, Globalization and innovation.
1305 Creative Skills Supported by Multidisciplinary Learning: Case Innovation Course at the Seinäjoki University of Applied Sciences
Authors: Satu Lautamäki
Abstract:
This paper presents findings from a multidisciplinary course (bachelor level) implemented at Seinäjoki University of Applied Sciences, Finland. The course aims to develop students' innovative thinking through projects given by companies, using design thinking methods as a tool for creativity, and by integrating students into multidisciplinary teams working on the given projects. The course is obligatory for all first year bachelor students across four faculties (business and culture, food and agriculture, health care and social work, and technology); it involves around 800 students and 30 pedagogical coaches and is implemented as an intensive one-week course each year. The paper discusses the pedagogy, structure, and coordination of the course, and reflects on methods for the development of creative skills. Experts in a contemporary, global context often work in teams consisting of people who have different areas of expertise and represent various professional backgrounds; that is why there is a strong need for new training methods in which a multidisciplinary approach is at the heart of learning. Creative learning takes place when different parties bring information to the discussion and learn from each other. When students in different fields look for professional growth for themselves and take responsibility for the professional growth of other learners, they form a mutual learning relationship with each other. Multidisciplinary team members make decisions both individually and collectively, which helps them understand and appreciate other disciplines. Our results show that creative and multidisciplinary project learning can develop a diversity of knowledge and competences, for instance students' cultural knowledge, teamwork and innovation competences, and time management and presentation skills, as well as support each student's personal development as an expert. It is highly recommended that higher education curricula include studies in which students from different fields work in multidisciplinary teams.
Keywords: Multidisciplinary learning, creative skills, innovative thinking, project-based learning.
1304 Pronominal Anaphora Processing
Authors: Anna Maria Di Sciullo
Abstract:
Discourse pronominal anaphora resolution must be part of any efficient information processing system, since the reference of a pronoun depends on an antecedent located in the discourse. Contrary to knowledge-poor approaches, this paper shows that syntax-semantic relations are basic to pronominal anaphora resolution. The identification of quantified expressions to which pronouns can be anaphorically related provides further evidence that pronominal anaphora is based on domains of interpretation where asymmetric agreement holds.
Keywords: Asymmetric agreement, pronominal anaphora, quantifiers and indefinite expressions.
1303 Two Dimensional Model for Extraction Packed Column Simulation Using Finite Element Method
Authors: N. Outili, A-H. Meniai
Abstract:
Modeling transfer phenomena in several chemical engineering operations leads to the resolution of systems of partial differential equations. Depending on the complexity of the operation's mechanisms, the equations take a nonlinear form and an analytical solution becomes difficult; we then have to use numerical methods, which are based on approximations that transform a differential system into an algebraic one. The finite element method is one numerical method that can be used to obtain accurate solutions in many complex chemical engineering cases. Packed columns find wide application as contactors for liquid-liquid systems, such as solvent extraction; in the literature, the modeling of this type of equipment has received less attention than plate columns. A two-dimensional mathematical model with radial and axial dispersion, simulating packed tower extraction behavior, was developed, and the partial differential equation was solved using the finite element method with the Galerkin formulation. We developed a Mathcad program that can be applied to similar equations, and concentration profiles are obtained along the column. The influence of radial dispersion was demonstrated and cannot be neglected; the results were compared with experimental concentrations at the top of the column for the extraction system acetone/toluene/water.
Keywords: Finite element method, Galerkin method, liquid-liquid extraction modeling, packed column simulation, two dimensional model.
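For orientation, a plausible form of such a two-dimensional (axial-radial) dispersion model for the solute concentration c(z, r, t), with interstitial velocity u, axial and radial dispersion coefficients D_ax and D_r, and an interphase mass transfer term Ka(c − c*), together with the Galerkin orthogonality condition on the residual R of the approximation c_h = Σ_j c_j φ_j (the paper's exact equation and boundary conditions may differ):

```latex
\frac{\partial c}{\partial t} + u \frac{\partial c}{\partial z}
= D_{ax} \frac{\partial^2 c}{\partial z^2}
+ D_r \frac{1}{r} \frac{\partial}{\partial r}\!\left( r \frac{\partial c}{\partial r} \right)
- K a \left( c - c^{*} \right),
\qquad
\int_{\Omega} R(c_h)\, \varphi_i \, d\Omega = 0 \quad \forall i .
```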
1302 Evaluation of the Role of Advocacy and the Quality of Care in Reducing Health Inequalities for People with Autism, Intellectual and Developmental Disabilities at Sheffield Teaching Hospitals
Authors: Jonathan Sahu, Jill Aylott
Abstract:
Individuals with Autism, Intellectual and Developmental Disabilities (AIDD) are one of the most vulnerable groups in society, hampered not only by their own limitations in understanding and interacting with the wider society, but also by societal limitations in perception and understanding. Communication to express their needs and wishes is fundamental to enabling such individuals to live and prosper in society. This research project was designed as an organisational case study, in a large secondary care hospital within the National Health Service (NHS), to assess the quality of care provided to people with AIDD and to review the role of advocacy in reducing health inequalities for these individuals. Methods: The research methodology adopted was that of an "insider researcher". Data collection included both quantitative and qualitative data, i.e., a mixed methods approach. A semi-structured interview schedule was designed and used to obtain qualitative and quantitative primary data from a wide range of interdisciplinary frontline health care workers, to assess their understanding and awareness of the systems, processes, and evidence based practice needed to offer a quality service to people with AIDD. Secondary data were obtained from sources within the organisation, in keeping with "case study" as a primary method, and organisational performance data were compared against national benchmarking standards. Further data sources were accessed to help evaluate the effectiveness of the different types of advocacy present in the organisation, gauged by measures of user and carer experience in the form of retrospective survey analysis, incidents, and complaints. Results: Secondary data demonstrate near compliance of the organisation with the current national benchmarking standard (Monitor Compliance Framework). However, primary data demonstrate poor knowledge of the Mental Capacity Act 2005 and poor knowledge of the organisational systems, processes, and evidence based practice applied for people with AIDD. In addition, there was poor knowledge and awareness among frontline health care workers of advocacy and advocacy schemes for this group. Conclusions: A significant amount of work needs to be undertaken to improve the quality of care delivered to individuals with AIDD. An operational strategy promoting the widespread dissemination of information may not be the best approach to delivering quality care, optimal patient experience, and patient advocacy. In addition, a more robust set of standards, with appropriate metrics, needs to be developed to assess organisational performance in a way that will stand the test of professional and public scrutiny.
Keywords: Autism, intellectual developmental disabilities, advocacy, health inequalities, quality of care.
1301 Choosing an Ontology Language
Authors: Anna V. Zhdanova, Uwe Keller
Abstract:
We summarize information that facilitates choosing an ontology language for knowledge intensive applications. This paper is a short version of the ontology language state-of-the-art and evolution analysis carried out for choosing an ontology language in the IST Esperonto project. First, we analyze the changes and evolution that took place in the field of Semantic Web languages during recent years, in particular around the ontology languages of the RDF/S and OWL family. Second, we present current trends in the development of Semantic Web languages, in particular rule support extensions for Semantic Web languages and emerging ontology languages such as the WSMO languages.
Keywords: OWL, RDF/S, Semantic Web languages, WSML.
1300 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore
Authors: Ronal Muresano, Andrea Pagano
Abstract:
Nowadays, mathematical and statistical applications are developed with increasing complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to execute faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, as they allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, and data dependencies in the model. These issues become more important when we wish to improve an application's performance and scalability. Hence, this paper describes an optimization method, developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool of the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are done in an automatic and transparent manner, with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, which achieves a considerable improvement in execution time: in the best case tested, the time is reduced by around 96% between the original serial version and the automatic parallel version.
Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.
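The SYMBOL port itself targets OpenMP inside the node; as a language-neutral sketch of the same data-parallel idea (independent Monte Carlo batches spread across cores), here is a Python multiprocessing version with an entirely hypothetical loss model:

```python
import numpy as np
from multiprocessing import Pool

def simulate_losses(seed, n_banks=100, n_scenarios=50_000):
    """Hypothetical stand-in for one batch of SYMBOL-style Monte Carlo trials:
    draw bank-level losses and return the batch's 99.9% aggregate tail loss."""
    rng = np.random.default_rng(seed)
    losses = rng.lognormal(mean=-2.0, sigma=1.0,
                           size=(n_scenarios, n_banks)).sum(axis=1)
    return np.quantile(losses, 0.999)

if __name__ == "__main__":
    with Pool() as pool:                                # one worker per core
        tails = pool.map(simulate_losses, range(8))     # 8 independent batches
    print(np.mean(tails))
```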
1299 Sufficiency Economy: A Contribution to Economic Development
Authors: Prasopchoke Mongsawad
Abstract:
The Philosophy of Sufficiency Economy, bestowed by His Majesty King Bhumibol Adulyadej on the people of Thailand, highlights a balanced way of living. Its three principles of moderation, reasonableness, and immunity, along with the conditions of morality and knowledge, can be applied to any level of society, from the individual to the nation. The Philosophy of Sufficiency Economy helps address current development challenges, which include issues of institutions, environmental sustainability, human well-being, and the role of the government.
Keywords: Sufficiency Economy, development theory, sustainable development, environmental sustainability, social capital, human well-being.
1298 Basic Business-Forces behind the Surviving and Sustainable Organizations: The Case of Medium Scale Contractors in South Africa
Authors: Iruka C. Anugwo, Winston M. Shakantu
Abstract:
The objective of this study is to uncover the basic business-forces behind the survival and sustainable performance of medium scale contractors in the South African construction market. This study is essential, as it aims to contribute towards long-term strategic solutions for combating the incessant failure of start-up construction organizations in South Africa. The study used a qualitative research methodology as the most appropriate approach to elicit, understand, and uncover the phenomena that constitute the basic business-forces for active contractors in the market. The study adopted a phenomenological approach; in-depth interviews were conducted with 20 medium scale contractors in Port Elizabeth, South Africa, between August and October 2015. This allowed for an in-depth understanding of the critical and basic business-forces that influenced their survival and performance beyond the first five years of business operation. The findings showed that, to survive in a competitive business environment such as the construction industry, potential contractors (start-ups) must possess the basic business-forces: educational knowledge in construction and business management related disciplines, adequate industrial experience, competencies and capabilities to deliver excellent services and products, and a spirit of entrepreneurship. It can therefore be concluded that, as a strategic approach to minimizing the endless failure of start-up construction businesses, potential construction contractors must endeavor to acquire basic education, training, and qualifications, and to gain industrial experience together with the required competencies, capabilities, and entrepreneurial acumen. Without these basic business-forces, as discovered in this study, the majority of contractors entering the market will find it difficult to develop and grow a competitive and sustainable construction organization in South Africa.
Keywords: Basic business-forces, medium scale contractors, South Africa, sustainable organisations.
1297 Nutritional Potential and Traditional Uses of High Altitude Wild Edible Plants in Eastern Himalayas, India
Authors: Hui Tag, Jambey Tsering, Pallabi Kalita Hui, Baikuntha Jyoti Gogoi, Vijay Veer
Abstract:
Food security issues and their relevance in the high mountain regions of the world have often been neglected. Wild edible plants have played a major role in livelihood security among the tribal communities of the East Himalayan Region since time immemorial. The Eastern Himalayan Region of India is one of the mega-diverse regions of the world, rated among the top 12 Global Biodiversity Hotspots by IUCN and recognized as one of the 200 significant eco-regions of the globe. The region supports one of the world's richest alpine floras, and about one-third of its species are endemic to the region. There are at least 7,500 flowering plants, 700 orchids, 58 bamboo species, 64 citrus species, 28 conifers, 500 mosses, 700 ferns, and 728 lichens. The region is home to more than three hundred different ethnic communities with diverse knowledge of the traditional uses of flora and fauna as food, medicine, and beverages. The Monpa, Memba, and Khamba are among the local communities residing in the high altitude region of the Eastern Himalaya with rich traditional knowledge related to the utilization of wild edible plants. They are followers of the Mahayana sect of Himalayan Buddhism, are mostly agrarian by primary occupation, and have relied heavily on wild edible plants for their livelihood security during famine for millennia. In the present study, we report traditional uses of 40 wild edible plant species, of which 6 were analyzed at the biochemical level for nutrient content and free radical scavenging activity. The results show significant free radical scavenging (antioxidant) activity and nutritional potential for the selected 6 wild edible plants used by the local communities of the Eastern Himalayan Region of India.
Keywords: East Himalaya, Local community, Wild edible plants, Nutrition, Food security.
1296 Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise
Authors: J. P. Dubois, Omar M. Abdul-Latif
Abstract:
The Support Vector Machine (SVM) is a statistical learning tool developed from the more complex concept of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments: Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive color Gaussian noise (ACGN). The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to a conventional optimal model-based detector for binary signaling driven by binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially in the low SNR (signal-to-noise ratio) range. For large SNR, the performance of the SVM was similar to that of the classical detectors; however, the convergence between SVM and maximum likelihood detection occurred at a higher SNR as the noise environment became more hostile.
Keywords: Colour noise, Doppler shift, innovation filter, least square-support vector machine, matched filter, Rayleigh fading, Wiener filter.
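A compact experiment in this spirit: train a scikit-learn SVM on faded, noisy BPSK samples and measure BER (a simplified stand-in for the paper's least-square SVM and full channel models):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, snr_db = 20_000, 5
bits = rng.integers(0, 2, n)
symbols = 2.0 * bits - 1.0                                # BPSK: 0/1 -> -1/+1
h = np.abs(rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)  # Rayleigh gain
noise = 10 ** (-snr_db / 20) * rng.normal(size=n)         # AWGN
r = h * symbols + noise

X = np.column_stack([r, h])     # receiver assumed to know the fading amplitude
clf = SVC(kernel="rbf").fit(X[:4000], bits[:4000])
ber = np.mean(clf.predict(X[4000:]) != bits[4000:])
print(f"SVM detector BER at {snr_db} dB over Rayleigh fading: {ber:.4f}")
```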
1295 An Algorithm Proposed for FIR Filter Coefficients Representation
Authors: Mohamed Al Mahdi Eshtawie, Masuri Bin Othman
Abstract:
Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite precision errors, and efficient implementation. In contrast, they have the major disadvantage of needing a higher order (more coefficients) than an IIR counterpart with comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal in digital filter design. This paper presents an algorithm proposed for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, reducing the arithmetic complexity needed to compute the filter output and, consequently, the system characteristics such as power consumption, area usage, and processing time. The proposed algorithm is especially powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in designing high order digital FIR filters: DA eliminates the need for multipliers when implementing the multiply and accumulate unit (MAC), and the proposed algorithm reduces the number of adders and addition operations needed to compute the filter output by minimizing the number of non-zero coefficients.
Keywords: Pulse shaping filter, distributed arithmetic, optimization algorithm.
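The paper's algorithm is its own contribution; as a baseline sketch of the underlying trade-off (fewer non-zero taps versus response accuracy), simple magnitude thresholding of a SciPy-designed filter illustrates the idea (this is not the proposed method):

```python
import numpy as np
from scipy.signal import firwin, freqz

h = firwin(numtaps=101, cutoff=0.2)      # reference linear-phase FIR design
# Zero out taps below 1% of the largest coefficient magnitude
h_sparse = np.where(np.abs(h) >= 0.01 * np.abs(h).max(), h, 0.0)
print(f"non-zero taps: {np.count_nonzero(h)} -> {np.count_nonzero(h_sparse)}")

# Compare the frequency responses of the full and sparsified filters
w, H = freqz(h)
_, Hs = freqz(h_sparse)
err_db = 20 * np.log10(np.maximum(np.abs(H - Hs), 1e-12)).max()
print(f"worst-case response deviation: {err_db:.1f} dB")
```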
1294 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model
Authors: Nicolae Bold, Daniel Nijloveanu
Abstract:
Cropping systems are a method used by farmers. It is an environmentally friendly method, protecting natural resources (soil, water, air, nutritive substances) while increasing production, by taking crop particularities into account. Combining this powerful method with the concepts of genetic algorithms yields a way of generating sequences of crops that form a rotation. This type of algorithm has been efficient in solving optimization problems, and its polynomial complexity allows it to be applied to more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of cultures; one of the expected results is to optimize the usage of resources, in order to minimize costs and maximize profit. To achieve these goals, a genetic algorithm was designed which finds several optimized cropping-system possibilities that have the highest profit and thus minimize costs. The algorithm uses genetic methods (mutation, crossover) and structures (genes, chromosomes): a cropping-system possibility is treated as a chromosome, and a crop within the rotation is a gene within that chromosome. Results on the efficiency of this method are presented in a dedicated section. The implementation of this method would benefit farmers by giving them hints and helping them use resources efficiently.
Keywords: Genetic algorithm, chromosomes, genes, cropping, agriculture.
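A minimal sketch of such a genetic algorithm in Python, with a hypothetical profit table and a monoculture penalty standing in for the real agronomic model (gene = crop grown in a given year, chromosome = rotation):

```python
import numpy as np

rng = np.random.default_rng(0)
CROPS = ["wheat", "maize", "soy", "barley", "rapeseed"]
PROFIT = np.array([400.0, 550.0, 500.0, 350.0, 450.0])   # hypothetical EUR/ha

def fitness(chrom):
    """Total rotation profit, penalizing the same crop in consecutive years."""
    return PROFIT[chrom].sum() - 300.0 * np.sum(chrom[1:] == chrom[:-1])

def evolve(pop_size=60, years=6, generations=200):
    pop = rng.integers(0, len(CROPS), size=(pop_size, years))
    for _ in range(generations):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # truncation selection
        a = parents[rng.integers(0, len(parents), pop_size)]
        b = parents[rng.integers(0, len(parents), pop_size)]
        cut = rng.integers(1, years, size=pop_size)          # one-point crossover
        children = np.where(np.arange(years) < cut[:, None], a, b)
        mut = rng.random(children.shape) < 0.05              # per-gene mutation
        children[mut] = rng.integers(0, len(CROPS), mut.sum())
        pop = children
    best = pop[np.argmax([fitness(c) for c in pop])]
    return [CROPS[g] for g in best]

print(evolve())   # e.g. a 6-year rotation avoiding consecutive repeats
```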