Search results for: multi-echelon inventory system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17794

9604 Study of the Influence of Refractory Nitride Additives on Hydrogen Storage Properties of Ti6Al4V-Based Materials Produced by Spark Plasma Sintering

Authors: John Olorunfemi Abe, Olawale Muhammed Popoola, Abimbola Patricia Idowu Popoola

Abstract:

Hydrogen is an appealing alternative to fossil fuels because of its abundance, low weight, high energy density, and relative lack of contaminants. However, its low density presents a number of storage challenges. This work therefore studies the influence of refractory nitride additives, 5 wt. % each of hexagonal boron nitride (h-BN), titanium nitride (TiN), and aluminum nitride (AlN), on the hydrogen storage and electrochemical characteristics of Ti6Al4V-based materials produced by spark plasma sintering. The microstructure and phase constituents of the sintered materials were characterized using scanning electron microscopy (in conjunction with energy-dispersive spectroscopy) and X-ray diffraction, respectively. Pressure-composition-temperature (PCT) measurements were used to assess the hydrogen absorption/desorption behavior, kinetics, and storage capacities of the sintered materials. The pure Ti6Al4V alloy displayed a two-phase (α+β) microstructure, while the modified composites exhibited clear microstructural changes with the appearance of nitride-rich secondary phases. The kinetics of hydrogen absorption were found to be diffusion-controlled, so absorption proceeded faster at elevated temperatures. The additives acted as catalysts, lowering the activation energy and accelerating the rate of hydrogen sorption in the composites relative to the monolithic alloy. Ti6Al4V-5 wt. % h-BN appears to be the most promising candidate for hydrogen storage (2.28 wt. %), followed by Ti6Al4V-5 wt. % TiN (2.09 wt. %), whereas Ti6Al4V-5 wt. % AlN shows the poorest hydrogen storage performance (1.35 wt. %). Accordingly, the developed hydride system (Ti6Al4V-5h-BN) may be competitive for applications involving short-range vehicles (~50-100 km) as well as stationary applications such as electrochemical devices, large-scale storage cylinders at hydrogen production sites, and hydrogen filling stations.
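The catalytic effect described above follows directly from the Arrhenius relation: lowering the activation energy raises the rate constant exponentially. A minimal sketch with hypothetical activation energies and temperature (the abstract does not report the measured values):

```python
import math

R = 8.314          # gas constant, J/(mol*K)
T = 573.0          # K, an illustrative sorption temperature
Ea_alloy = 60e3    # J/mol, hypothetical activation energy, monolithic alloy
Ea_comp = 45e3     # J/mol, hypothetical value after nitride addition

# Arrhenius: k = A * exp(-Ea / (R*T)); with equal prefactors A,
# the sorption-rate ratio depends only on the activation-energy drop
speedup = math.exp((Ea_alloy - Ea_comp) / (R * T))
print(f"rate increase: {speedup:.1f}x")
```

Even a modest reduction in activation energy thus yields an order-of-magnitude acceleration, consistent with the catalytic role the abstract attributes to the nitride additives.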

Keywords: hydrogen storage, Ti6Al4V hydride system, pressure-composition-temperature measurements, refractory nitride additives, spark plasma sintering, Ti6Al4V-based materials

Procedia PDF Downloads 43
9603 Modified Fractional Curl Operator

Authors: Rawhy Ismail

Abstract:

Applying fractional calculus to electromagnetics yields significant results. Fractionalizing the conventional curl operator admits additional solutions to an electromagnetic problem. This work revisits the concept of the fractional curl operator, considering fractional time derivatives in Maxwell's curl equations. In that setting, a general scheme for the wave loss term is introduced, and the degrees of freedom of the system are modified by imposing the new fractional parameters. The conventional case is recovered by setting all fractional derivatives to unity.
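One common way to write such a fractionalization replaces the first-order time derivatives in the curl equations with fractional orders; this sketches the general idea, not necessarily the exact scheme of the paper:

```latex
\nabla \times \mathbf{E} = -\frac{\partial^{\alpha} \mathbf{B}}{\partial t^{\alpha}}, \qquad
\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial^{\beta} \mathbf{D}}{\partial t^{\beta}},
\qquad 0 < \alpha, \beta \le 1 .
```

Setting α = β = 1 recovers the conventional Maxwell curl equations, as the abstract notes; intermediate orders introduce dissipative (loss) behavior and additional free parameters.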

Keywords: curl operator, fractional calculus, fractional curl operators, Maxwell equations

Procedia PDF Downloads 461
9602 Improving the Gain of a Multiband Antenna by Adding an Artificial Magnetic Conductor Metasurface

Authors: Amira Bousselmi

Abstract:

This article presents a PIFA antenna designed for geolocation (GNSS) applications operating at 1.278 GHz, 2.8 GHz, 5.7 GHz and 10 GHz. To improve the performance of the antenna, an artificial magnetic conductor (AMC) structure was used. Backing the antenna with the AMC resulted in a measured gain of 4.78 dBi. The results of simulations and measurements are presented. CST Microwave Studio is used to design the antenna and compare its performance. The antenna design methodology and the design and characterization of the AMC surface are described, and the simulated and measured performances of the AMC-backed antenna are then discussed.

Keywords: multiband antenna, global navigation satellite system, AMC, Galileo

Procedia PDF Downloads 57
9601 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to tackle the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, convolutional neural networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was mostly based on the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5 and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of CNNs, this data is transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the PM10 concentration prediction for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with models based on linear regression. The numerical tests, carried out using real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than the currently used methods based on linear regression. Moreover, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
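The tensor pipeline described above (sensor features in, convolution and pooling in the hidden layers, a 24-element hourly forecast out) can be sketched with plain numpy. The layer sizes, kernel width, and feature ordering below are illustrative assumptions, not the authors' architecture, and the weights are random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    # x: (C_in, T), w: (C_out, C_in, K), b: (C_out,) -> valid convolution
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.zeros((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            out[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    # non-overlapping temporal max pooling
    t = x.shape[1] // size
    return x[:, :t * size].reshape(x.shape[0], t, size).max(axis=2)

# hypothetical input tensor: 6 features (e.g. PM2.5, PM10, temperature,
# wind, forecast temperature, forecast wind) over the last 48 hours
x = rng.standard_normal((6, 48))

w1 = rng.standard_normal((16, 6, 3)) * 0.1   # 16 filters, kernel width 3
b1 = np.zeros(16)
h = max_pool(relu(conv1d(x, w1, b1)))        # hidden feature map (16, 23)

w2 = rng.standard_normal((24, h.size)) * 0.1  # dense head
b2 = np.zeros(24)
y = w2 @ h.ravel() + b2                       # 24 hourly PM10 predictions

print(y.shape)  # (24,)
```

A production model would stack more convolutional layers and learn the weights by minimizing mean square error, as the abstract describes.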

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 123
9600 Multi-Agent System Based Solution for Operating Agile and Customizable Micro Manufacturing Systems

Authors: Dylan Santos De Pinho, Arnaud Gay De Combes, Matthieu Steuhlet, Claude Jeannerat, Nabil Ouerhani

Abstract:

The Industry 4.0 initiative has been launched to address huge challenges related to ever-smaller batch sizes. The end-user need for highly customized products requires highly adaptive production systems in order to keep shop floors at the same efficiency. Most of the classical software solutions that operate manufacturing processes on a shop floor are based on rigid Manufacturing Execution Systems (MES), which are not capable of adapting the production order on the fly to changing demands and/or conditions. In this paper, we present a highly modular and flexible solution to orchestrate a set of production systems composed of a micro-milling machine-tool, a polishing station, a cleaning station, a part inspection station, and a rough material store. The stations are installed according to a novel matrix configuration of a 3x3 vertical shelf. The cells of the shelf are connected through horizontal and vertical rails on which a set of shuttles circulate to transport the machined parts from one station to another. Our software solution for orchestrating the tasks of each station is based on a multi-agent system. Each station and each shuttle is operated by an autonomous agent. All agents communicate with a central agent that holds all the information about the manufacturing order. The core innovation of this paper lies in the path planning of the shuttles, with two major objectives: 1) reduce the waiting time of stations and thus the cycle time of the entire part, and 2) reduce disturbances such as shuttle-generated vibration, which strongly impacts the manufacturing process and thus the quality of the final part. Simulation results show that the cycle time of the parts is reduced by up to 50% compared with MES-operated linear production lines, while the disturbance is systematically avoided for critical stations like the milling machine-tool.
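The second path-planning objective can be illustrated with a toy sketch: Dijkstra routing over the 3x3 shelf, where cells next to the milling machine-tool carry an extra vibration cost while it is cutting. The layout, penalty weight, and station position here are hypothetical stand-ins, not the paper's planner:

```python
from heapq import heappush, heappop

# 3x3 shelf cells, connected by horizontal/vertical rails (assumed layout)
CELLS = {(r, c) for r in range(3) for c in range(3)}
MILL = (1, 1)            # hypothetical position of the milling machine-tool
VIB_PENALTY = 5          # extra cost for passing next to it while it cuts

def neighbours(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if (r + dr, c + dc) in CELLS:
            yield (r + dr, c + dc)

def cost(cell, milling_active):
    near_mill = abs(cell[0] - MILL[0]) + abs(cell[1] - MILL[1]) == 1
    return 1 + (VIB_PENALTY if milling_active and near_mill else 0)

def plan(start, goal, milling_active=True):
    # lazy Dijkstra: cheapest shuttle route given the current penalties
    heap, seen = [(0, start, [start])], set()
    while heap:
        d, cell, path = heappop(heap)
        if cell == goal:
            return d, path
        if cell in seen:
            continue
        seen.add(cell)
        for nxt in neighbours(cell):
            if nxt not in seen:
                heappush(heap, (d + cost(nxt, milling_active), nxt, path + [nxt]))

print(plan((0, 0), (2, 2), milling_active=True))
print(plan((0, 0), (2, 2), milling_active=False))
```

In a real deployment each shuttle agent would re-plan as station states change, trading route length against the vibration disturbance at critical stations.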

Keywords: multi-agent systems, micro-manufacturing, flexible manufacturing, transfer systems

Procedia PDF Downloads 117
9599 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System

Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar

Abstract:

Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing from the leading edge and suction at the trailing edge. The jet at the leading edge uses the Coanda effect to energise the boundary layer, delaying flow separation and creating high lift with low drag. The SWAP concept originated with the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out at the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of this system. The tests aim to understand the airflow characteristics around the aerofoil and to estimate the lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibits clear trends of increasing lift as injection momentum increases, with critical flow attachment points identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°. The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows, over-predicting surface pressures and lift coefficient for fully attached flow cases. Work continues on finding an all-encompassing modelling approach which predicts surface pressures well for all combinations of jet injection momentum and AOA.
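For reference, the jet momentum coefficient swept above is conventionally defined as follows (a standard circulation-control definition, not quoted from the paper):

```latex
C_{\mu} = \frac{\dot{m}\, V_{j}}{q_{\infty} S}, \qquad
q_{\infty} = \tfrac{1}{2}\,\rho\, U_{\infty}^{2},
```

where \(\dot{m}\) is the jet mass flow rate, \(V_j\) the jet exit velocity, \(q_\infty\) the freestream dynamic pressure, and \(S\) the reference area. Higher \(C_\mu\) means more momentum injected into the boundary layer, which is why lift rises with injection momentum until the flow is fully attached.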

Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel

Procedia PDF Downloads 122
9598 Influence Zone of Strip Footing on Untreated and Cement Treated Sand Mat Underlain by Soft Clay

Authors: Sharifullah Ahmed

Abstract:

Shallow foundations on soft soils without ground improvement can undergo excessive settlement. In such cases, an alternative to pile foundations may be shallow strip footings placed on a soil system in which the upper layer is untreated or cement-treated compacted sand, limiting the settlement to a permissible level. This research deals with a rigid plane-strain strip footing of 2.5 m width placed on a soil consisting of an untreated or cement-treated sand layer underlain by homogeneous soft clay. Upper layers both thin and thick relative to the footing width were considered. The soft inorganic cohesive NC clay layer is considered undrained for plastic loading stages and drained in consolidation stages, and the sand layer is drained in all loading stages. FEM analysis was done using PLAXIS 2D Version 8.0 with a model consisting of a clay deposit of 15 m thickness and 18 m width. The soft clay layer was modeled using the Hardening Soil Model, Soft Soil Model, and Soft Soil Creep Model, and the upper improvement layer was modeled using only the Hardening Soil Model. The system is considered fully saturated. A natural void ratio of 1.2 is used. Total displacement fields of the strip footing and subsoil layers are presented for both untreated and cement-treated sand as the upper layer. For Hi/B = 0.6 or above, the major deformation is distributed within the upper layer and the influence zone of the footing is confined to it, which indicates that the untreated upper layer fully and effectively bears the foundation. For Hi/B = 0.3 or above, the major deformation likewise occurs within the upper layer and the influence zone of the footing is confined to it, indicating the complete effectiveness of the cement-treated upper layer. Brittle behavior of cemented sand and fracture or cracking is not considered in this analysis.

Keywords: displacement, ground improvement, influence depth, PLAXIS 2D, primary and secondary settlement, sand mat, soft clay

Procedia PDF Downloads 76
9597 Challenges to Quality Primary Health Care in Saudi Arabia and Potential Improvements Implemented by Other Systems

Authors: Hilal Al Shamsi, Abdullah Almutairi

Abstract:

Introduction: As primary healthcare centres play an important role in implementing Saudi Arabia's health strategy, this paper offers a review of publications on the quality of the country's primary health care. With the aim of deciding on solutions for improvement, it provides an overview of healthcare quality in this context and indicates barriers to quality. Method: Using two databases, ProQuest and Scopus, data extracted from published articles were systematically analysed to determine the quality of care in Saudi primary health centres and the obstacles to achieving higher quality. Results: Twenty-six articles met the criteria for inclusion in this review. The components of healthcare quality were examined in terms of the access to and effectiveness of interpersonal and clinical care. Good access and effective care were identified in areas such as maternal health care and the control of epidemic diseases, whereas poor access and effectiveness of care were found in chronic disease management programmes, referral patterns (in terms of referral letters and feedback reports), health education and interpersonal care (in terms of language barriers). Several factors were identified as barriers to high-quality care, including problems with evidence-based practice implementation, professional development, the use of referrals to secondary care and organisational culture. Successful improvements have been implemented in other systems, such as mobile medical units, electronic referrals, online translation tools and mobile devices and their applications; these could be adopted in Saudi Arabia to improve the quality of its primary healthcare system. Conclusion: The quality of primary health care in Saudi Arabia varies among the different services. To improve quality, management programmes and organisational culture must be promoted in primary health care. Professional development strategies are also needed to improve the skills and knowledge of healthcare professionals, and improvements proven elsewhere can be adopted to strengthen the primary health system.

Keywords: quality, primary health care, Saudi Arabia, health centres, general medical

Procedia PDF Downloads 173
9596 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases process model, which starts with the selection of datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. The data was then pre-processed: the major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating labelled and unlabelled datasets, dimensionality reduction, size reduction, and data transformation tasks such as discretization. A total of 21,533 intrusion records were used for training the models, and a separate set of 3,397 records was used as a testing set to validate the performance of the selected model. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and default parameter values showed the best classification accuracy, with a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset when classifying new instances into the normal, DOS, U2R, R2L and probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested for arriving at a deployable system in the area of study.
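The evaluation protocol described above (two classifiers compared under 10-fold cross-validation) can be sketched with scikit-learn. Note the assumptions: scikit-learn's `DecisionTreeClassifier` implements CART, used here only as a stand-in for Weka's J48/C4.5, and the data below is synthetic, not the MIT Lincoln Laboratory records:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# synthetic stand-in for pre-processed connection records:
# 8 numeric features, binary label (normal vs attack)
X = rng.standard_normal((500, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

for name, clf in [
    ("decision tree (CART, stand-in for J48)", DecisionTreeClassifier(random_state=0)),
    ("naive Bayes", GaussianNB()),
]:
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

With the real multi-class labels (normal, DOS, U2R, R2L, probe) the same `cross_val_score` call applies unchanged; only `y` differs.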

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 277
9595 Quantum Teleportation Using W-BELL and Bell-GHZ Channels

Authors: Abhinav Pandey

Abstract:

Quantum teleportation is the transfer of a quantum state between two particles that are not in physical contact with each other. It has long featured in quantum computation and theoretical physics. Using an entangled pair, teleportation can be achieved with 100% success over the possible measurement outcomes. We introduce a 5-qubit general entanglement system using W-Bell and Bell-GHZ channel pairs and show its usefulness in teleportation. In this paper, we use these channels to achieve probabilistic teleportation through channels conventionally regarded as non-teleporting, which has not been achieved before, and we compare the W-Bell and Bell-GHZ channels to determine which performs better in terms of the probability of teleporting a single qubit.
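As a baseline for the channels discussed above, the deterministic Bell-channel protocol can be verified with a small statevector simulation. This sketches only the conventional three-qubit case (message qubit plus one Bell pair), not the paper's 5-qubit W-Bell or Bell-GHZ constructions:

```python
import numpy as np

# single-qubit gates
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron_all(ms):
    out = ms[0]
    for m in ms[1:]:
        out = np.kron(out, m)
    return out

def op(gate, target, n=3):
    # embed a 1-qubit gate on `target` in an n-qubit register
    return kron_all([gate if q == target else I for q in range(n)])

def cnot(control, target, n=3):
    # |0><0| on control tensor I  +  |1><1| on control tensor X on target
    P0, P1 = np.diag([1, 0]), np.diag([0, 1])
    a = [P0 if q == control else I for q in range(n)]
    b = [P1 if q == control else (X if q == target else I) for q in range(n)]
    return kron_all(a) + kron_all(b)

# message state |psi> = a|0> + b|1>  (arbitrary normalized choice)
a, b = 0.6, 0.8j
psi = np.array([a, b])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00>+|11>)/sqrt(2) on qubits 1,2
state = np.kron(psi, bell)                   # qubit 0 carries the message

# Alice: CNOT(0 -> 1), then H on qubit 0
state = op(H, 0) @ (cnot(0, 1) @ state)

# every measurement branch: after Bob's Pauli correction Z^m0 X^m1,
# qubit 2 carries |psi>
for m0 in (0, 1):
    for m1 in (0, 1):
        proj = np.kron(np.kron(np.diag([1 - m0, m0]), np.diag([1 - m1, m1])), I)
        branch = proj @ state
        correction = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
        branch = op(correction, 2) @ branch
        bob = branch.reshape(2, 2, 2)[m0, m1, :]   # qubit-2 amplitudes
        bob = bob / np.linalg.norm(bob)
        assert np.allclose(np.abs(np.vdot(bob, psi)), 1.0)

print("teleportation succeeds in all four measurement branches")
```

Each of the four outcomes occurs with probability 1/4, and after the classical-bit-conditioned Pauli correction the fidelity is 1 in every branch, which is the 100% success rate the abstract refers to for an entangled (Bell) channel.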

Keywords: entanglement, teleportation, no cloning theorem, quantum mechanics, probability

Procedia PDF Downloads 25
9594 Fire Resilient Cities: The Impact of Fire Regulations, Technological and Community Resilience

Authors: Fanny Guay

Abstract:

Building resilience, sustainable buildings, urbanization, climate change, and resilient cities are just a few examples of where the focus of research has been in the last few years. It is obvious that there is a need to rethink how we build our cities and how we renovate our existing buildings. However, the question remains: how can we ensure that we are building sustainable yet resilient cities? There are many aspects one can touch upon when discussing resilience in cities, but after the Grenfell fire in June 2017, it has become clear that fire resilience must be a priority. We define resilience as a holistic approach including communities, society and systems, focusing not only on resisting the effects of a disaster but also on coping with and recovering from it. Cities are an example of such a system, where components such as buildings have an important role to play. A building on fire will have an impact on the community, the economy and the environment, and thus on the entire system. Therefore, we believe that fire and resilience go hand in hand when discussing building resilient cities. This article discusses the current state of the concept of fire resilience and suggests actions to support the construction of more fire-resilient buildings. Using the case of Grenfell and the fire safety regulations in the UK, we briefly compare the fire regulations of other European countries, more precisely France, Germany and Denmark, to underline the differences and make suggestions for increasing fire resilience via regulation. We also include other types of resilience: technological resilience, concerning the structure of buildings themselves, and community resilience, considering the role of communities in building resilience. Our findings demonstrate that increasing fire resilience may require amending existing regulations, for example in how reaction-to-fire tests are performed and how building products are classified. However, as we are looking at national regulations, we are only able to make general suggestions for improvement. Another finding of this research is that the capacity of a community to recover and adapt after a fire is also an essential factor. Fundamentally, fire resilience, technological resilience and community resilience are closely connected. Building resilient cities is not only about sustainable buildings or energy efficiency; it is about ensuring that all aspects of resilience are included when building or renovating. We must ask ourselves questions such as: Who are the users of this building? Where is the building located? What are the components of the building, how was it designed, and which construction products have been used? If we want resilient cities, we must answer these basic questions and ensure that basic factors such as fire resilience are included in our assessment.

Keywords: buildings, cities, fire, resilience

Procedia PDF Downloads 145
9593 Identifying, Reporting and Preventing Medical Errors Among Nurses Working in Critical Care Units At Kenyatta National Hospital, Kenya: Closing the Gap Between Attitude and Practice

Authors: Jared Abuga, Wesley Too

Abstract:

Medical error is the third leading cause of death in the US, with approximately 98,000 deaths occurring every year as a result of medical errors. The global financial burden of medication errors is roughly USD 42 billion. Medication errors may lead to at least one death daily and injure roughly 1.3 million people every year. Medical error reporting is essential in creating a culture of accountability in our healthcare system. Studies of the attitudes and practice of healthcare workers in reporting medical errors have shown that the major factors in under-reporting include work stress and fear of the medico-legal consequences of disclosing an error. Further, the majority believed that an increase in reporting medical errors would contribute to a better system. Most hospitals depend on nurses to discover medication errors because they are considered to be the sources of these errors, whether as contributors or mere observers; consequently, nurses' perception of medication errors and of what needs to be done is a vital element in reducing their incidence. We sought to explore knowledge of medical errors and the factors affecting or hindering their reporting among nurses working in the emergency unit at KNH. Critical care nurses face many barriers to completing incident reports on medication errors; one barrier contributing to under-reporting is a lack of education and/or knowledge regarding medication errors and the reporting process. This study therefore sought to determine the availability and use of reporting systems for medical errors in critical care units, to establish nurses' perceptions regarding medical errors and reporting, and to document factors facilitating the timely identification and reporting of medical errors in critical care settings. Methods: The study used a cross-sectional design to collect data from 76 critical care nurses from Kenyatta Teaching & Research National Referral Hospital, Kenya. Data analysis is ongoing; by October 2022 we will have the analysis, results, discussion, and recommendations of the study ready for the conference in 2023.

Keywords: errors, medical, Kenya, nurses, safety

Procedia PDF Downloads 221
9592 The Cost-Effectiveness of Pancreatic Surgical Cancer Care in the US vs. the European Union: Results of a Review of the Peer-Reviewed Scientific Literature

Authors: Shannon Hearney, Jeffrey Hoch

Abstract:

While all cancers are costly to treat, pancreatic cancer is a notoriously costly and deadly form of cancer. Across the world there is a variety of treatment centers, ranging from small clinics to large, high-volume hospitals, as well as differing structures of payment and access. While it has been noted that centers treating a high volume of pancreatic cancer patients deliver a higher quality of care, it is unclear whether that care is cost-effective. In the US there is no clear consensus on the cost-effectiveness of high-volume centers for the surgical care of pancreatic cancer. European countries such as Finland and Italy have shown that high-volume centers have lower mortality rates and can have lower costs; however, there is still a gap in knowledge about these centers' cost-effectiveness globally. This paper seeks to review the current literature in Europe and the US to gain a better understanding of the cost-effectiveness of high-volume pancreatic surgical centers while considering the contextual differences in health system structure. A review of major reference databases such as Medline, Embase and PubMed will be conducted for cost-effectiveness studies on the surgical treatment of pancreatic cancer at high-volume centers. Possible MeSH terms to be included, but not limited to, are: "pancreatic cancer", "cost analysis", "cost-effectiveness", "economic evaluation", "pancreatic neoplasms", "surgical", "Europe", "socialized medicine", "privatized medicine", "for-profit", and "high-volume". Studies must also have been available in English. This review will encompass European scientific literature as well as that of the US. Based on our preliminary findings, we anticipate that high-volume hospitals provide better care at greater cost, and that they may be cost-effective in different contexts depending on the national structure of a healthcare system. Countries with more centralized and socialized healthcare may yield results that are more cost-effective. High-volume centers may differ in the cost-effectiveness of their surgical care of pancreatic cancer internationally, especially when comparing those in the United States to others throughout Europe.

Keywords: cost-effectiveness analysis, economic evaluation, pancreatic cancer, scientific literature review

Procedia PDF Downloads 76
9591 The Relationship between Creative Imagination and Curriculum

Authors: Faride Hashemiannejad, Shima Oloomi

Abstract:

Imagination is one of the important elements of creative thinking, a skill that needs attention from the educational system. Although most students learn reading, writing, and arithmetic skills well, they lack higher-level thinking skills like creative thinking. Therefore, in the information age and at the beginning of the entry into a knowledge-based society, the educational system needs to reconsider its goals and mission and concentrate on a creativity-based curriculum. Among the curriculum elements of goals, content, method and evaluation, "method" is a major domain whose reform can pave the way for fostering imagination and creativity. The purpose of this study was to examine the relationship between creativity development and curriculum. The research questions were: (1) Is there a relationship between the cognitive-emotional structure of the classroom and creativity development? (2) Is there a relationship between the environmental-social structure of the classroom and creativity development? (3) Is there a relationship between the thinking structure of the classroom and creativity development? (4) Is there a relationship between the physical structure of the classroom and creativity development? (5) Is there a relationship between the instructional structure of the classroom and creativity development? Method: This is applied research using a correlational design. Participants: The study included 894 students, from high school through 11th grade, from seven schools in seven zones of Mashad city. Sampling plan: Multi-stage random sampling was used. Measurement: The dependent measures in this study were (a) the Test of Creative Thinking and (b) a researcher-made questionnaire comprising five sections: cognitive-emotional structure, environmental-social structure, thinking structure, physical structure, and instructional structure. The results show: There was a significant relationship between the cognitive-emotional structure of the classroom and students' creativity development (sig=0.139). There was a significant relationship between the environmental-social structure of the classroom and students' creativity development (sig=0.006). There was a significant relationship between the thinking structure of the classroom and students' creativity development (sig=0.004). There was no significant relationship between the physical structure of the classroom and students' creativity development (sig=0.215). There was a significant relationship between the instructional structure of the classroom and students' creativity development (sig=0.003). These findings indicate that if students feel secure, calm and confident, they can experience creative learning. The quality of coping with students' questions, imaginations and risk-taking can also influence their creativity development.
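The significance values reported above come from testing correlations between classroom-structure ratings and creativity scores. A minimal sketch of such a test on synthetic (invented) scores, assuming SciPy is available; the data and effect size here are illustrative, not the study's:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# hypothetical scores: classroom-structure rating vs creativity-test score
structure = rng.normal(50, 10, 200)
creativity = 0.5 * structure + rng.normal(0, 10, 200)

r, p = pearsonr(structure, creativity)
print(f"r = {r:.2f}, p = {p:.4f}")  # p below 0.05 marks a significant relationship
```

The reported sig values are interpreted the same way: values below the chosen alpha (conventionally 0.05) indicate a statistically significant relationship.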

Keywords: imagination, creativity, curriculum

Procedia PDF Downloads 462
9590 Pursuing Knowledge Society Excellence: Knowledge Management and Open Innovation Platforms for Research, Industry and Business Collaboration in Singapore

Authors: Irina-Emily Hansen, Ola Jon Mork

Abstract:

The European economic growth strategy and its supporting framework for research and innovation highlight the importance of nurturing open innovation in order to strengthen Europe’s competitiveness. One of the main approaches to enhancing innovation in European society is the Triple Helix model, which centres on science-industry collaboration and assigns the managerial role to the universities. In spite of this defined collaboration strategy, collaboration between academia and industry in Europe still faces many challenges. Many of them are explained by cultural difference: academic culture aims at scientific knowledge, while businesses are oriented towards production and profitable results; the execution of collaborative projects is also seen differently by the partners involved. This suggests that traditional management strategies applied to collaboration between researchers and businesses are not effective. There is a need for dynamic strategies that can support the interaction between researchers and industry, intensifying knowledge co-creation and contributing to the development of the national innovation system (NIS) by incorporating individual, organizational, and inter-organizational learning. In order to find a good example to follow, the authors of this paper investigated one of the most rapidly developing knowledge-based innovation societies, Singapore. Singapore does not possess many land or sea resources that would normally provide income for a country. It was therefore forced to think differently and build its society on the resources that are available: talented people and knowledge. Over the last twenty years, Singapore has attracted highly rated university campuses, research institutions, and leading industrial companies from all over the world. This article elucidates and elaborates Singapore’s national innovation strategies from a Knowledge Management perspective.
The research covers the variety of organizations that enable and support knowledge development in this state: governmental research and development (R&D) centers in universities, private talent incubators for entrepreneurs, and industrial companies with their own R&D departments. The research methods are based on presentations, documents, and visits to a number of universities, research institutes, innovation parks, governmental institutions, industrial companies, and innovation exhibitions in Singapore. In addition, a literature review of scientific articles on the topic was carried out. The first finding is that the objectives of collaboration between researchers, entrepreneurs, and industry in Singapore correspond to the primary goals of the state: knowledge and economic growth; there are common objectives for all stakeholders at all national levels. The second finding is that Singapore has an enabling system at the national level that supports innovation all the way from fostering or capturing new knowledge, through knowledge exchange and co-creation, to its application in real life. The conclusion is that innovation means not only a new idea but also the enabling mechanism for its execution and a market-oriented approach so that new knowledge can be absorbed by society. Future research could address the application of Singapore’s knowledge management strategy for innovation to European countries.

Keywords: knowledge management strategy, national innovation system, research industry and business collaboration, knowledge enabling

Procedia PDF Downloads 166
9589 Unsupervised Part-of-Speech Tagging for Amharic Using K-Means Clustering

Authors: Zelalem Fantahun

Abstract:

Part-of-speech (POS) tagging is the process of assigning a part-of-speech or other lexical class marker to each word in naturally occurring text. It is one of the most fundamental tasks in almost all natural language processing. In natural language processing, providing large amounts of manually annotated data is a knowledge acquisition bottleneck. Since Amharic is an under-resourced language, the lack of a tagged corpus is the bottleneck for natural language processing, especially for POS tagging. A promising direction to tackle this problem is to build a system that does not require manually tagged data. In unsupervised learning, the learner is not provided with classifications; unsupervised algorithms seek out similarity between pieces of data in order to determine whether they can be characterized as forming a group. This paper explicates the development of an unsupervised part-of-speech tagger for Amharic using K-means clustering, since large amounts of raw text are produced in day-to-day activities. The development of the tagger followed four phases. First, the unlabeled raw text is divided into 10 folds and tokenized: the raw text is chunked at the sentence level and then into words. The second phase is feature extraction, which includes word frequency and the syntactic and morphological features of a word. The third phase is clustering; among the different clustering algorithms, K-means is selected and implemented in this study to bring groups of similar words together. The fourth phase is mapping, in which each cluster is examined carefully and the most common tag is assigned to the group. This study identifies two features that are capable of distinguishing one part-of-speech from another, namely morphological features and positional information, and shows that it is possible to use unsupervised learning for Amharic POS tagging.
To increase the performance of the unsupervised part-of-speech tagger, there is a need to incorporate other features not included in this study, such as semantically related information. Based on the experimental results, the system achieves a maximum of 81% accuracy.
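The clustering phase described above can be sketched as plain K-means over word feature vectors. The two-dimensional feature tuples and the deterministic initialization below are illustrative assumptions, not the study's exact setup:

```python
from collections import defaultdict

def kmeans(points, k, iters=20):
    """Plain K-means over feature tuples (e.g. frequency, mean position).

    Deterministic initialization (first k points) keeps the sketch simple;
    a real tagger would use random restarts or k-means++.
    """
    centers = list(points[:k])
    for _ in range(iters):
        clusters = defaultdict(list)
        # assignment step: each word vector joins its nearest center
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # update step: recompute each center as the cluster mean
        for i in range(k):
            if clusters[i]:
                centers[i] = tuple(sum(dim) / len(dim)
                                   for dim in zip(*clusters[i]))
    return centers, clusters
```

In the mapping phase, each resulting cluster would then be inspected and labeled with its most common POS tag.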

Keywords: POS tagging, Amharic, unsupervised learning, k-means

Procedia PDF Downloads 425
9588 Embodied Cognition as a Concept of Educational Neuroscience and Phenomenology

Authors: Elham Shirvani-Ghadikolaei

Abstract:

In this paper, we examine the connection between the human mind and body within the framework of Merleau-Ponty's phenomenology. We study the role of this connection in designing more efficient learning environments, alongside the findings of embodied cognition research and educational neuroscience. Our research shows the interplay between the mind and the body in the external world and discusses its implications. Based on these observations, we make suggestions as to how the educational system can benefit from taking the interaction between the mind and the body into account in educational affairs.

Keywords: educational neurosciences, embodied cognition, pedagogical neurosciences, phenomenology

Procedia PDF Downloads 293
9587 Formulating a Definition of Hate Speech: From Divergence to Convergence

Authors: Avitus A. Agbor

Abstract:

Numerous incidents, ranging from trivial to catastrophic, come to mind when one reflects on hate. The victims of these incidents belong to specific, identifiable groups within communities. Such experiences evoke discussions on Islamophobia, xenophobia, homophobia, anti-Semitism, racism, ethnic hatred, atheism, and other brutal forms of bigotry. Common to all of these is an invisible but potent driving force: hatred. Such hatred is usually fueled by a profound degree of intolerance (of diversity) and the zeal to impose on others beliefs and practices considered to be the conventional norm. More importantly, the perpetuation of these hateful acts is the unfortunate outcome of an overplay of invectives and hate speech which, to a great extent, cannot be divorced from hate. From a legal perspective, acknowledging the existence of an undeniable link between hate speech and hate is quite easy. However, both within and outside legal scholarship, the notion of "hate speech" remains a conundrum: a phrase more easily explained through experience than captured in a watertight definition of its entire essence and nature. The problem is further compounded by a few factors. First, within the international human rights framework, the notion of hate speech is not used: in limiting the right to freedom of expression, the ICCPR simply excludes specific kinds of speech (but does not refer to them as hate speech). Regional human rights instruments are not so different, except for subsequent developments in the European Union, in which the notion has been carefully delineated so that a much clearer picture of what constitutes hate speech is now provided. Second, the legal architecture of domestic legal systems clearly shows differences in approach and regulation, making definition more difficult: what may be hate speech in one legal system may very well be acceptable speech in another.
Lastly, the cornucopia of academic voices on the issue of hate speech reflects this divergence. In the absence of a well-formulated and universally acceptable definition, it is important to consider how hate speech can be defined. Taking an evidence-based approach, this research looks into the issue of defining hate speech in legal scholarship and into how and why such a formulation is of critical importance for the prohibition and prosecution of hate speech.

Keywords: hate speech, international human rights law, international criminal law, freedom of expression

Procedia PDF Downloads 49
9586 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank

Authors: Jalal Haghighat Monfared, Zahra Akbari

Abstract:

Today, business executives need useful information to make better decisions. Banks have likewise been using information tools to direct the decision-making process towards their desired goals by rapidly extracting information from sources with the help of business intelligence. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's general departments (190 people) who use business intelligence reports. The sample size of 123 was determined randomly by statistical methods. Relevant statistical inference has been used for data analysis and hypothesis testing. In the first stage, the normality of the data was tested using the Kolmogorov-Smirnov test; in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, integration with other systems, user access, flexibility, and risk management support, the flexibility of the business intelligence system was the most strongly correlated with the dependent variable of the present research. This shows that Mellat Bank needs to pay more attention to choosing business intelligence systems with high flexibility in terms of the ability to produce custom-formatted reports.
After flexibility, the quality of data in business intelligence systems showed the strongest relationship with the quality of decision making. Therefore, improving data quality, including the internal or external source of the data, the quantitative or qualitative type of the data, the credibility of the data, and the perceptions of those who use the business intelligence system, improves the quality of decision making in Mellat Bank.
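The correlation step in such a hypothesis test can be computed directly with Pearson's coefficient; the sketch below uses hypothetical data, not the study's survey responses:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. flexibility scores vs. decision-quality scores (hypothetical)
r = pearson_r([3, 4, 4, 5, 5], [2, 4, 3, 5, 5])
```

A coefficient near +1 would indicate the kind of strong positive relationship the study reports for flexibility.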

Keywords: business intelligence, business intelligence capability, decision making, decision quality

Procedia PDF Downloads 99
9585 Risk-Sharing Financing of Islamic Banks: Better Shielded against Interest Rate Risk

Authors: Mirzet SeHo, Alaa Alaabed, Mansur Masih

Abstract:

In theory, risk-sharing-based financing (RSF) is considered a cornerstone of Islamic finance and is argued to render Islamic banks more resilient to shocks. In practice, however, this feature of Islamic financial products is almost negligible; instead, debt-based instruments with conventional-like features have overwhelmed the nascent industry. In addition, the framework of present-day economic, regulatory, and financial reality inevitably exposes Islamic banks in dual banking systems to the problems of conventional banks, including, but not limited to, interest rate risk. Empirical evidence has thus far confirmed such exposures, despite Islamic banks’ interest-free operations. This study applies system GMM to model the determinants of RSF and finds that RSF is insensitive to changes in interest rates. Hence, our results support the “stability” view of risk-sharing-based financing. This suggests RSF as the way forward for risk management at Islamic banks in the absence of widely accepted Shariah-compliant hedging instruments. Further support for the stability view is given by evidence of counter-cyclicality: unlike debt-based lending, which inflates artificial asset bubbles through credit expansion during the upswing of business cycles, RSF is negatively related to GDP growth. Our results also imply a significantly strong relationship between risk-sharing deposits and RSF. However, the pass-through of these deposits to RSF is economically low: only about 40% of risk-sharing deposits are channeled to risk-sharing financing. This raises questions about the validity of the industry’s claim that depositors accustomed to conventional banking shun risk sharing, and it signals potential for better balance sheet management at Islamic banks. Overall, our findings suggest that, on the one hand, Islamic banks can gain ‘independence’ from conventional banks and interest rates through risk-sharing products, the potential for which is enormous.
On the other hand, RSF could enable policy makers to improve systemic stability and restrain excessive credit expansion through its countercyclical features.

Keywords: Islamic banks, risk-sharing, financing, interest rate, dynamic system GMM

Procedia PDF Downloads 302
9584 Informing, Enabling and Inspiring Social Innovation by Geographic Systems Mapping: A Case Study in Workforce Development

Authors: Cassandra A. Skinner, Linda R. Chamberlain

Abstract:

The nonprofit and public sectors are increasingly turning to Geographic Information Systems for data visualizations that can better inform programmatic and policy decisions, while the private and nonprofit sectors are turning to systems mapping to better understand the ecosystems within which they operate. This study explores the potential of combining these data visualization methods, an approach called geographic systems mapping, to create a comprehensive understanding of a social problem's ecosystem and thereby support social innovation efforts. Researchers at Grand Valley State University collaborated with Talent 2025 of West Michigan on a mixed-methods research study to paint a comprehensive picture of the workforce development ecosystem in West Michigan. Using semi-structured interviews, observation, secondary research, and quantitative analysis, data were compiled on workforce development organizations' locations, programming, metrics for success, partnerships, funding sources, and service language. To best visualize and disseminate the data, a geographic systems map was created that identifies programmatic, operational, and geographic gaps in the workforce development services of West Michigan. By combining geographic and systems mapping methods, the geographic systems map provides insight into the cross-sector relationships, collaboration, and competition that exist among and between workforce development organizations. These insights identify opportunities for, and constraints on, cross-sectoral social innovation in the West Michigan workforce development ecosystem. This paper discusses the process used to prepare the geographic systems map, explains the results and outcomes, and demonstrates how geographic systems mapping illuminated the needs of the community and opportunities for social innovation.
As complicated social problems like unemployment often require cross-sectoral and multi-stakeholder solutions, there is potential for geographic systems mapping to be a tool that informs, enables, and inspires these solutions.

Keywords: cross-sector collaboration, data visualization, geographic systems mapping, social innovation, workforce development

Procedia PDF Downloads 276
9583 Introduction of an Approach of Complex Virtual Devices to Achieve Device Interoperability in Smart Building Systems

Authors: Thomas Meier

Abstract:

One of the major challenges for sustainable smart building systems is to support device interoperability, i.e. connecting sensor or actuator devices from different vendors and presenting their functionality to external applications. Furthermore, smart building systems are supposed to connect with devices that are not available yet, i.e. devices that become available on the market some time later. It is of vital importance that a sustainable smart building platform provides an appropriate external interface that can be leveraged by external applications and smart services. An external platform interface must be stable and independent of specific devices, and should support flexible and scalable usage scenarios. A typical approach applied in smart home systems is based on a generic device interface used within the smart building platform: device functions, even of rather complex devices, are mapped to that generic base-type interface by means of specific device drivers. Our new approach, presented in this work, extends this by using the smart building system's rule engine to create complex virtual devices that can represent the most diverse properties of real devices. We examined and evaluated both approaches by means of a practical case study using a smart building system that we have developed. We show that our solution allows the highest degree of flexibility without affecting the stability and scalability of the external application interface. In contrast to other systems, our approach supports complex virtual device configuration at the application layer (e.g. by administration users) instead of device configuration at the platform layer (e.g. by platform operators). Based on our work, we can show that our approach supports almost arbitrarily flexible use-case scenarios without affecting the stability of the external application interface.
However, the cost of this approach is additional configuration overhead and additional resource consumption at the IoT platform level, which must be considered by platform operators. We conclude that the concept of complex virtual devices presented in this work can significantly improve the usability and device interoperability of sustainable intelligent building systems.
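The idea of a complex virtual device composed from real devices through application-layer rules can be sketched as follows; the class names and the averaging rule are hypothetical illustrations, not the authors' platform API:

```python
class Device:
    """Generic device interface exposed to external applications."""
    def read(self):
        raise NotImplementedError

class Sensor(Device):
    """Stand-in for a vendor-specific device behind a driver."""
    def __init__(self, value):
        self.value = value
    def read(self):
        return self.value

class ComplexVirtualDevice(Device):
    """Aggregates several physical devices behind one virtual property.

    The rule is supplied at the application layer (e.g. by an
    administration user), so the platform interface stays generic.
    """
    def __init__(self, devices, rule):
        self.devices = devices
        self.rule = rule
    def read(self):
        return self.rule([d.read() for d in self.devices])

# a virtual "room temperature" built from two vendor sensors
room_temp = ComplexVirtualDevice([Sensor(20.5), Sensor(21.5)],
                                 rule=lambda vs: sum(vs) / len(vs))
```

External applications see only the stable `Device` interface, while the composition rule can be reconfigured without touching the platform layer.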

Keywords: Internet of Things, smart building, device interoperability, device integration, smart home

Procedia PDF Downloads 247
9582 Kalman Filter for Bilinear Systems with Application

Authors: Abdullah E. Al-Mazrooei

Abstract:

In this paper, we present a new kind of bilinear system in state space form, in which the evolution of the system depends on the product of the state vector with itself. The well-known Lotka-Volterra and Lorenz models are special cases of this new model. We also present a generalization of the Kalman filter which is suitable for the new bilinear model. An application to real measurements is introduced to illustrate the efficiency of the proposed algorithm.
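For orientation, a minimal scalar Kalman filter for the linear case is sketched below as the baseline that the bilinear generalization extends; the authors' extended filter and their noise parameters are not reproduced here. Model: x_k = a*x_{k-1} + w, z_k = x_k + v.

```python
def kalman_1d(zs, a=1.0, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter: process noise q, measurement noise r."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # predict: propagate state and error covariance
        x = a * x
        p = a * p * a + q
        # update: blend prediction with measurement via the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

In the bilinear setting the prediction step would instead involve the product of the state with itself, which is where the generalization lies.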

Keywords: bilinear systems, state space model, Kalman filter, application, models

Procedia PDF Downloads 411
9581 Cotton Transplantation as a Practice to Escape Infection with Some Soil-Borne Pathogens

Authors: E. M. H. Maggie, M. N. A. Nazmey, M. A. Abdel-Sattar, S. A. Saied

Abstract:

A successful trial of transplanting cotton is reported. Seeds were grown in trays for 4-5 weeks in an easily prepared supporting medium such as peat moss or similar plant waste. Careful transplanting of the seedlings, with the root system kept as intact as possible, was then made in the permanent field. The practice reduced the incidence of damping-off and allowed full winter crop revenues. Further work is needed to evaluate parameters such as the growth curve, flowering curve, and yield on an economic basis.

Keywords: cotton, transplanting cotton, damping-off diseases, environmental sciences

Procedia PDF Downloads 342
9580 Performance of PAPR Reduction in OFDM System for Wireless Communications

Authors: Alcardo Alex Barakabitze, Saddam Aziz, Muhammad Zubair

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM) is a special form of multicarrier transmission that splits the total transmission bandwidth into a number of orthogonal, non-overlapping subcarriers and transmits the collections of bits called symbols in parallel over these subcarriers. In this paper, we explore the Peak-to-Average Power Ratio (PAPR) problem in OFDM systems. We provide a performance analysis of the CCDF and BER through MATLAB simulations.
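The PAPR of a single OFDM symbol can be illustrated directly: take the inverse DFT of the subcarrier symbols and compare peak to mean power. This is a Python sketch of the standard definition (the paper itself used MATLAB):

```python
import cmath
import math

def papr_db(symbols):
    """PAPR of one OFDM symbol in dB.

    The subcarrier symbols are transformed to time-domain samples via an
    inverse DFT; PAPR is the peak sample power over the mean sample power.
    """
    n = len(symbols)
    time_samples = [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                        for k, s in enumerate(symbols)) / n
                    for t in range(n)]
    powers = [abs(x) ** 2 for x in time_samples]
    return 10 * math.log10(max(powers) / (sum(powers) / n))
```

The worst case, where all N subcarriers add coherently (e.g. identical BPSK symbols), gives a PAPR of 10*log10(N) dB, which is why reduction techniques matter as N grows.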

Keywords: bit error ratio (BER), OFDM, peak-to-average power ratio (PAPR), sub-carriers

Procedia PDF Downloads 521
9579 The Effects of Billboard Content and Visible Distance on Driver Behavior

Authors: Arsalan Hassan Pour, Mansoureh Jeihani, Samira Ahangari

Abstract:

Distracted driving has been one of the most integral concerns surrounding our daily use of vehicles since the invention of the automobile. While much attention has recently been given to cell-phone-related distraction, commercial billboards along roads are also candidates for drivers' visual and cognitive distraction, as they may take drivers’ eyes from the road and their minds off the driving task to see, perceive, and think about the billboard’s content. Using a driving simulator and a head-mounted eye-tracking system, speed change, acceleration, deceleration, throttle response, collision, lane-changing, and offset-from-lane-center data were collected in this study, along with gaze fixation duration and frequency data. Some 92 participants from a fairly diverse sociodemographic background drove on a simulated freeway in the Baltimore, Maryland area and were exposed to three different billboards to investigate the effects of billboards on driver behavior. Participants glanced at the billboards several times with different frequencies; glances were most frequent on the billboard with the highest cognitive load. About 74% of the participants did not look at billboards for more than two seconds per glance, except for the billboard with a short visible area. Analysis of variance (ANOVA) was performed to find variations in driving behavior across the areas where the billboard is not yet visible, where it is readable, and after it has been passed. The results show slight differences in speed, throttle, brake, steering velocity, and lane changing among the different areas. Brake force and deviation from the center of the lane increased in the readable area in comparison with the merely visible area, and speed increased right after each billboard. The results indicated that billboards have a significant effect on driving performance and visual attention depending on their content and visibility status.
A generalized linear model (GLM) analysis showed no connection between participants’ age or driving experience and gaze duration. However, the visible distance of the billboard, gender, and billboard content had significant effects on gaze duration.
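The ANOVA step compares a driving measure across the billboard areas; the one-way F statistic it rests on can be sketched as follows, with hypothetical group data rather than the study's simulator logs:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across groups of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # between-group sum of squares: how far group means sit from the grand mean
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # within-group sum of squares: spread of observations around their group mean
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# e.g. brake force in the pre-visibility, readable, and post-billboard areas
f_stat = one_way_anova_f([[0.2, 0.3, 0.25], [0.5, 0.6, 0.55], [0.3, 0.35, 0.3]])
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom would indicate a significant difference between areas.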

Keywords: ANOVA, billboards, distracted driving, drivers' behavior, driving simulator, eye-tracking system, GLM

Procedia PDF Downloads 114
9578 The Development of a Digitally Connected Factory Architecture to Enable Product Lifecycle Management for the Assembly of Aerostructures

Authors: Nicky Wilson, Graeme Ralph

Abstract:

Legacy aerostructure assembly is defined by large components, low build rates, and manual assembly methods. With increasing demand for commercial aircraft and emerging markets such as eVTOL (electric vertical take-off and landing), current methods of manufacturing are not capable of efficiently meeting these higher-rate demands. This project looks at how legacy manufacturing processes can be rate-enabled by taking a holistic view of data usage, focusing on how data can be collected to enable fully integrated digital factories and supply chains. The study focuses on how data flows both up and down the supply chain to create a digital thread specific to each part and assembly, while enabling machine learning through real-time, closed-loop feedback systems. The study also develops a bespoke architecture to enable connectivity both within the factory and with the wider PLM (product lifecycle management) system, moving away from the traditional point-to-point systems used to connect IO devices towards a hub-and-spoke architecture that exploits report-by-exception principles. This paper outlines the key issues facing legacy aircraft manufacturers, focusing on what future manufacturing will look like when Industry 4.0 principles are adopted. The research also defines the data architecture of a PLM system to enable the transfer and control of a digital thread within the supply chain, and proposes a standardised communications protocol as a scalable solution for connecting IO devices within a production environment. This research comes at a critical time for aerospace manufacturers, who are seeing a shift towards the integration of digital technologies within legacy production environments while build rates continue to grow. It is vital that manufacturing processes become more efficient in order to meet these demands while also securing future work for many manufacturers.
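The report-by-exception principle mentioned above can be sketched in a few lines: an IO device publishes to the hub only when its reading moves beyond a deadband, instead of streaming every sample point-to-point. The class name, deadband value, and readings below are hypothetical:

```python
class ExceptionReporter:
    """Publishes a reading only when it deviates enough from the last
    published value, cutting traffic on the hub-and-spoke network."""
    def __init__(self, publish, deadband=0.5):
        self.publish = publish      # callback to the hub
        self.deadband = deadband    # minimum change worth reporting
        self.last = None
    def sample(self, value):
        if self.last is None or abs(value - self.last) > self.deadband:
            self.last = value
            self.publish(value)

sent = []
reporter = ExceptionReporter(sent.append, deadband=0.5)
for reading in [20.0, 20.1, 20.4, 21.2, 21.3, 19.9]:
    reporter.sample(reading)
# only the first reading and the out-of-deadband changes are published
```

Six samples produce three messages here, which is the scalability argument for report-by-exception over continuous polling.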

Keywords: Industry 4, digital transformation, IoT, PLM, automated assembly, connected factories

Procedia PDF Downloads 57
9577 Rendering Cognition Based Learning in Coherence with Development within the Context of PostgreSQL

Authors: Manuela Nayantara Jeyaraj, Senuri Sucharitharathna, Chathurika Senarath, Yasanthy Kanagaraj, Indraka Udayakumara

Abstract:

PostgreSQL is an Object-Relational Database Management System (ORDBMS) that has been in existence for a long while. Despite the superior features that it packages for managing databases and data, the database community has not fully realized the importance and advantages of PostgreSQL. Hence, this research focuses on providing a better development environment for PostgreSQL in order to increase its utilization and elucidate its importance. PostgreSQL is also known as the world's most advanced SQL-compliant open-source ORDBMS. Yet users have not resolved to adopt PostgreSQL, because its capabilities remain under-publicized and its persistently textual environment is complex for an introductory user. Simply stated, there is a dire need to explicate an easy way of making users comprehend the procedures and standards with which databases are created, tables and the relationships among them defined, and queries manipulated based on conditions in PostgreSQL, to help the community adopt PostgreSQL at an increased rate. Hence, this research initially identifies the dominant features provided by PostgreSQL over its competitors. Following the identified merits, an analysis of why the database community is hesitant to migrate to PostgreSQL's environment is carried out. These findings are modulated and tailored based on the scope and the constraints discovered. The research proposes a system that serves as a design platform as well as a learning tool, providing an interactive method of learning via a visual editor mode while incorporating a textual editor for well-versed users. The study is based on devising viable solutions that analyze a user's cognitive perception in comprehending human-computer interfaces and the behavioural processing of design elements.
By providing a visually draggable and manipulable environment for working with PostgreSQL databases and table queries, it is expected to highlight the elementary features offered by PostgreSQL over existing systems in order to convey their importance and simplicity to a hesitant user.
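One way such a visual editor could bridge dragging and the textual environment is by generating the underlying DDL from the diagrammed table; the helper below is a hypothetical sketch of that mapping, not the proposed system's actual code:

```python
def table_to_ddl(name, columns):
    """Render a (table name, [(column, type), ...]) description, as a
    visual editor might capture it, into a CREATE TABLE statement."""
    cols = ", ".join(f"{col} {col_type}" for col, col_type in columns)
    return f"CREATE TABLE {name} ({cols});"

ddl = table_to_ddl("student", [("id", "serial primary key"),
                               ("name", "text")])
```

Showing the generated statement next to the diagram lets an introductory user connect the visual model to the PostgreSQL syntax it stands for.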

Keywords: cognition, database, PostgreSQL, text-editor, visual-editor

Procedia PDF Downloads 258
9576 Achieving Maximum Performance through the Practice of Entrepreneurial Ethics: Evidence from SMEs in Nigeria

Authors: S. B. Tende, H. L. Abubakar

Abstract:

It is acknowledged that small and medium enterprises (SMEs) may encounter different ethical issues and pressures that affect how they strategize or make decisions concerning the outcome of their business. Therefore, this research aimed at assessing entrepreneurial ethics in the business of SMEs in Nigeria. Secondary data were used as the corpus for the analysis. The findings conclude that a sound entrepreneurial ethics system has a significant effect on the level of performance of SMEs in Nigeria. The Nigerian Government needs to provide both guiding and physical structures, as well as learning systems that can inculcate these entrepreneurial ethics.

Keywords: culture, entrepreneurial ethics, performance, SME

Procedia PDF Downloads 359
9575 Technical and Practical Aspects of Sizing an Autonomous PV System

Authors: Abdelhak Bouchakour, Mustafa Brahami, Layachi Zaghba

Abstract:

The use of photovoltaic energy offers an inexhaustible supply of clean, non-polluting energy, which is a definite advantage. The geographical location of Algeria promotes the development of this energy source, given the intensity of the radiation received and the duration of sunshine. For this reason, the objective of our work is to develop a software tool for calculating and optimizing the sizing of photovoltaic installations. Our optimization approach is based on mathematical models which, among other things, describe the operation of each part of the installation, the energy production, the storage, and the consumption of energy.
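The core sizing arithmetic that such a tool automates can be sketched as follows; the formulas and efficiency figures here are textbook assumptions, not the authors' exact models:

```python
def size_pv_array(daily_load_wh, peak_sun_hours, system_efficiency=0.75):
    """Required PV array power (Wp) to cover the daily load, after
    derating for wiring, temperature, and inverter losses."""
    return daily_load_wh / (peak_sun_hours * system_efficiency)

def size_battery(daily_load_wh, autonomy_days=2, depth_of_discharge=0.5,
                 bus_voltage=24):
    """Battery bank capacity in amp-hours for the chosen days of autonomy."""
    return daily_load_wh * autonomy_days / (depth_of_discharge * bus_voltage)

# e.g. a 3 kWh/day load at 5 peak sun hours (plausible for southern Algeria)
array_wp = size_pv_array(3000, 5)     # ~800 Wp
battery_ah = size_battery(3000)       # ~500 Ah at 24 V
```

An optimizing tool would iterate over component catalogs and irradiation data to minimize cost while meeting such constraints.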

Keywords: solar panel, solar radiation, inverter, optimization

Procedia PDF Downloads 586