Search results for: handwriting recognition system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18448

9538 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended particulate dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland cannot boast of good air quality, in particular due to high PM concentration levels. Therefore, based on the dense network of Airly sensors, this work addresses the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, convolutional neural networks (CNNs) were adopted, these currently being among the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was mostly based upon the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified using real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 hours, served as input data. Due to the specificity of the CNN network type, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for the upcoming 24-hour period. Over 1,000 models based on the CNN methodology were tested during the study. Those giving the best results were selected and then compared with models based on linear regression. The numerical tests, carried out using real ‘big’ data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. Moreover, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
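
As a minimal illustration of the architecture this abstract describes (an input tensor of hourly sensor features, hidden convolutional and pooling layers, and a 24-element output vector trained against a mean-square-error criterion), the sketch below builds such a network in Keras. The layer sizes, feature count, and training data are illustrative assumptions, not the authors' configuration:

```python
# Minimal sketch of a 1D-CNN mapping 24 hours of sensor features to a
# 24-element PM10 forecast. Layer sizes and feature choices are illustrative.
import numpy as np
from tensorflow.keras import layers, models

HOURS_IN, N_FEATURES, HOURS_OUT = 24, 6, 24  # e.g. PM2.5, PM10, temp, wind + forecasts

model = models.Sequential([
    layers.Input(shape=(HOURS_IN, N_FEATURES)),       # input tensor: hours x features
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),                 # pooling, as in the abstract
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(HOURS_OUT),                          # 24-element PM10 prediction vector
])
model.compile(optimizer="adam", loss="mse")           # MSE criterion, as in the study

# Dummy data standing in for the Airly measurements and external forecasts.
X = np.random.rand(256, HOURS_IN, N_FEATURES).astype("float32")
y = np.random.rand(256, HOURS_OUT).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]).shape)  # (1, 24): hour-by-hour forecast for the next day
```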

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 129
9537 Multi-Agent System Based Solution for Operating Agile and Customizable Micro Manufacturing Systems

Authors: Dylan Santos De Pinho, Arnaud Gay De Combes, Matthieu Steuhlet, Claude Jeannerat, Nabil Ouerhani

Abstract:

The Industry 4.0 initiative has been launched to address major challenges related to ever-smaller batch sizes. The end-user need for highly customized products requires highly adaptive production systems in order to maintain the efficiency of shop floors. Most classical software solutions that operate the manufacturing processes on a shop floor are based on rigid Manufacturing Execution Systems (MES), which are not capable of adapting the production order on the fly to changing demands and/or conditions. In this paper, we present a highly modular and flexible solution to orchestrate a set of production systems composed of a micro-milling machine-tool, a polishing station, a cleaning station, a part inspection station, and a rough material store. The stations are installed according to a novel matrix configuration of a 3x3 vertical shelf. The cells of the shelf are connected through horizontal and vertical rails on which a set of shuttles circulate to transport the machined parts from one station to another. Our software solution for orchestrating the tasks of each station is based on a multi-agent system. Each station and each shuttle is operated by an autonomous agent. All agents communicate with a central agent that holds all the information about the manufacturing order. The core innovation of this paper lies in the path planning of the shuttles, with two major objectives: 1) reduce the waiting time of stations and thus the cycle time of the entire part, and 2) reduce disturbances such as vibration generated by the shuttles, which strongly impact the manufacturing process and thus the quality of the final part. Simulation results show that the cycle time of the parts is reduced by up to 50% compared with MES-operated linear production lines, while disturbances are systematically avoided for critical stations such as the milling machine-tool.
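
The agent pattern described here can be sketched in a few lines of Python: station and shuttle agents act autonomously, a central agent holds the manufacturing order, and a shuttle plans its next move to minimise station waiting time. The class names and the queue-length heuristic are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of the multi-agent pattern described above.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class StationAgent:
    name: str
    queue: list = field(default_factory=list)   # parts waiting at this station

    def waiting_time(self) -> int:
        return len(self.queue)                  # proxy: longer queue, longer wait

@dataclass
class CentralAgent:
    routing: list                               # the manufacturing order
    stations: dict

    def next_station(self, steps_done: int) -> StationAgent | None:
        if steps_done < len(self.routing):
            return self.stations[self.routing[steps_done]]
        return None

@dataclass
class ShuttleAgent:
    name: str
    central: CentralAgent

    def plan_move(self, candidates: list[StationAgent]) -> StationAgent:
        # Objective 1: go where waiting is shortest, cutting part cycle time.
        # (Objective 2, vibration avoidance, would add a penalty term here.)
        return min(candidates, key=lambda s: s.waiting_time())

stations = {n: StationAgent(n) for n in
            ["store", "milling", "polishing", "cleaning", "inspection"]}
stations["polishing"].queue = ["part-1", "part-2"]
central = CentralAgent(["store", "milling", "polishing"], stations)
shuttle = ShuttleAgent("shuttle-1", central)

print(central.next_station(1).name)                       # milling
print(shuttle.plan_move([stations["polishing"],
                         stations["cleaning"]]).name)     # cleaning: shorter queue
```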

Keywords: multi-agent systems, micro-manufacturing, flexible manufacturing, transfer systems

Procedia PDF Downloads 119
9536 Immunoliposome-Mediated Drug Delivery to Plasmodium-Infected and Non-Infected Red Blood Cells as a Dual Therapeutic/Prophylactic Antimalarial Strategy

Authors: Ernest Moles, Patricia Urbán, María Belén Jiménez-Díaz, Sara Viera-Morilla, Iñigo Angulo-Barturen, Maria Antònia Busquets, Xavier Fernàndez-Busquets

Abstract:

Bearing in mind the absence of an effective vaccine against malaria and its severe clinical manifestations causing nearly half a million deaths every year, this disease nowadays represents a major threat to life. Moreover, the basic rationale followed by currently marketed antimalarial approaches is the administration of drugs on their own, promoting the emergence of drug-resistant parasites owing to the difficulty of delivering drug payloads into the parasitized erythrocyte at levels high enough to kill the intracellular pathogen while minimizing the risk of toxic side effects to the patient. This dichotomy has been successfully addressed through the specific delivery of immunoliposome (iLP)-encapsulated antimalarials to Plasmodium falciparum-infected red blood cells (pRBCs). Unfortunately, this strategy has not progressed towards clinical applications, and in vitro assays rarely reach drug efficacy improvements above 10-fold. Here, we show that encapsulation efficiencies reaching >96% can be achieved for the weakly basic drugs chloroquine (CQ) and primaquine using the pH-gradient active loading method in liposomes composed of neutrally charged, saturated phospholipids. Targeting antibodies are best conjugated through their primary amino groups, adjusting chemical crosslinker concentration to retain significant antigen recognition. Antigens from non-parasitized RBCs have also been considered as targets for the intracellular delivery of drugs not affecting erythrocytic metabolism. Using this strategy, we have obtained unprecedented nanocarrier targeting to early intraerythrocytic stages of the malaria parasite, for which there is a lack of specific extracellular molecular tags. Polyethylene glycol-coated liposomes conjugated with monoclonal antibodies specific for the erythrocyte surface protein glycophorin A (anti-GPA iLP) were capable of targeting 100% of RBCs and pRBCs at the low concentration of 0.5 μM total lipid in the culture, with >95% of added iLPs retained into the cells. When exposed for only 15 min to P. falciparum in vitro cultures synchronized at early stages, free CQ had no significant effect on parasite viability up to 200 nM drug, whereas iLP-encapsulated 50 nM CQ completely arrested its growth. Furthermore, when assayed in vivo in P. falciparum-infected humanized mice, anti-GPA iLPs cleared the pathogen below detectable levels at a CQ dose of 0.5 mg/kg. In comparison, free CQ administered at 1.75 mg/kg was, at most, 40-fold less efficient. Our data suggest that this significant improvement in antimalarial drug efficacy is in part due to a prophylactic effect of CQ encountered by the pathogen in its host cell at the very moment of invasion.

Keywords: immunoliposomal nanoparticles, malaria, prophylactic-therapeutic polyvalent activity, targeted drug delivery

Procedia PDF Downloads 357
9535 An Argument for Agile, Lean, and Hybrid Project Management in Museum Conservation Practice: A Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts

Authors: Maria Ledinskaya

Abstract:

This paper is part case study and part literature review. It seeks to introduce Agile, Lean, and Hybrid project management concepts from business, software development, and manufacturing fields to museum conservation by looking at their practical application on a recent conservation project at the Sainsbury Centre for Visual Arts. The author outlines the advantages of leaner and more agile conservation practices in today’s faster, less certain, and more budget-conscious museum climate where traditional project structures are no longer as relevant or effective. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre by private collectors Michael and Joyce Morris. It was a medium-sized conservation project of moderate complexity, planned and delivered in an environment with multiple known unknowns – unresearched collection, unknown conditions and materials, unconfirmed budget. The project was later impacted by the COVID-19 pandemic, introducing indeterminate lockdowns, budget cuts, staff changes, and the need to accommodate social distancing and remote communications. The author, then a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. The paper examines the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment due to its over-reliance on prediction-based planning and its low tolerance to change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, including the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics. Although not intentionally planned as such, the Morris Project had a number of Agile and Lean features which were instrumental to its successful delivery. These key features are identified as distributed decision-making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point in favour of a hybrid model, which combines traditional and alternative project processes and tools to suit the specific needs of the project.

Keywords: agile project management, conservation, hybrid project management, lean project management, waterfall project management

Procedia PDF Downloads 57
9534 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System

Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar

Abstract:

Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing from the leading edge and suction from the trailing edge. A jet at the leading edge uses the Coanda effect to energise the boundary layer in order to delay flow separation and create high lift with low drag. The SWAP concept was originated by the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out at the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of this system. The tests aim to understand the airflow characteristics around the aerofoil and investigate the approximate lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibit clear trends of increasing lift as injection momentum increases, with critical flow attachment points identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°. The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows, over-predicting surface pressures and lift coefficient for fully attached flow cases. Work must continue on finding an all-encompassing modelling approach which predicts surface pressures well for all combinations of jet injection momentum and AOA.
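
For reference, the jet momentum coefficient quoted above is conventionally defined as Cµ = ṁVj / (q∞S), i.e. the jet momentum flux normalised by freestream dynamic pressure times reference area. The short calculation below works through this standard definition with purely illustrative numbers, not values from the SWAP test campaign:

```python
# Worked example of the jet momentum coefficient, C_mu = m_dot*V_jet/(q_inf*S).
# All numbers are illustrative, not measurements from the SWAP tests.
rho = 1.225          # air density, kg/m^3
U_inf = 20.0         # freestream velocity, m/s
chord = 0.3          # model chord, m
span = 1.0           # model span, m
m_dot = 0.05         # jet mass flow rate, kg/s
V_jet = 60.0         # jet exit velocity, m/s

q_inf = 0.5 * rho * U_inf**2        # freestream dynamic pressure, Pa
S = chord * span                    # reference area, m^2
C_mu = (m_dot * V_jet) / (q_inf * S)
print(f"C_mu = {C_mu:.3f}")         # ~0.041, inside the tested 0-0.7 range
```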

Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel

Procedia PDF Downloads 126
9533 Owning (up to) the 'Art of the Insane': Re-Claiming Personhood through Copyright Law

Authors: Mathilde Pavis

Abstract:

From Schumann to Van Gogh, Frida Kahlo, and Ray Charles, the stories narrating the careers of artists with physical or mental disabilities are becoming increasingly popular. From the emergence of ‘pathography’ at the end of the 18th century to cinematographic portrayals, the work and lives of differently-abled creative individuals continue to fascinate readers, spectators and researchers. The achievements of those artists form the tip of an iceberg composed of complex politico-cultural movements which continue to advocate for wider recognition of disabled artists’ contribution to western culture. This paper envisages copyright law as a potential tool to such an end. It investigates the array of rights available to artists with intellectual disabilities to assert their position as authors of their artwork in the twenty-first century, looking at international and national copyright laws (UK and US). Put simply, this paper questions whether an artist’s intellectual disability could be a barrier to asserting their intellectual property rights over their creation. From a legal perspective, basic principles of non-discrimination would contradict the representation of artists’ disability as an obstacle to authorship as granted by intellectual property laws. Yet empirical studies reveal that artists with intellectual disabilities are often denied the opportunity to exercise their intellectual property rights or any form of agency over their work. In practice, it appears that, unlike other non-disabled artists, the prospect for differently-abled creators to make use of their rights is contingent on the context in which the creative process takes place. The management of such rights will often rest with the institution, art therapist or mediator involved in the artists’ work, as the latter will have necessitated greater support than their non-disabled peers for a variety of reasons, either medical or practical. Moreover, the financial setbacks suffered by medical institutions and private therapy practices have renewed administrators’ and physicians’ interest in monetising the artworks produced under their supervision. Adding to those economic incentives, the rise of criminal and civil litigation in psychiatric cases has also encouraged the retention of patients’ work by therapists who feel compelled to keep comprehensive medical records to shield themselves from liability in the event of a lawsuit. Unspoken transactions, contracts, implied agreements and consent forms have thus progressively made their way into the relationship between those artists and their therapists or assistants, disregarding any notions of copyright. The question of artists’ authorship finds itself caught in an unusually multi-faceted web of issues formed by tightening purse strings, ethical concerns and the fear of civil or criminal liability. Whilst those issues are playing out behind closed doors, the popularity of what was once called the ‘Art of the Insane’ continues to grow and open new commercial avenues. This socio-economic context exacerbates the need to devise a legal framework able to help practitioners, artists and their advocates navigate these issues in such a way that neither this minority nor our cultural heritage suffers from the fragmentation of the legal protection available to them.

Keywords: authorship, copyright law, intellectual disabilities, art therapy and mediation

Procedia PDF Downloads 136
9532 Influence Zone of Strip Footing on Untreated and Cement Treated Sand Mat Underlain by Soft Clay

Authors: Sharifullah Ahmed

Abstract:

Shallow foundations on soft soils without ground improvement can exhibit excessive settlement. In such a case, an alternative to pile foundations may be shallow strip footings placed on a soil system in which the upper layer is untreated or cement-treated compacted sand, limiting settlement to a permissible level. This research work deals with a rigid plane-strain strip footing of 2.5m width placed on a soil system consisting of an untreated or cement-treated sand layer underlain by homogeneous soft clay. Upper layers both thin and thick relative to the footing width were considered. The soft inorganic cohesive NC clay layer is considered undrained for plastic loading stages and drained in consolidation stages, and the sand layer is drained in all loading stages. FEM analysis was done using PLAXIS 2D Version 8.0 with a model consisting of a clay deposit of 15m thickness and 18m width. The soft clay layer was modeled using the Hardening Soil Model, the Soft Soil Model, and the Soft Soil Creep model, while the upper improvement layer was modeled using only the Hardening Soil Model. The system is considered fully saturated, and a natural void ratio of 1.2 is used. Total displacement fields of the strip footing and subsoil layers are presented for both untreated and cement-treated sand as the upper layer. For Hi/B = 0.6 or above, the major deformation and the influence zone of the footing are confined within the upper layer, indicating that the untreated upper layer is fully effective in bearing the foundation. For Hi/B = 0.3 or above, the major deformation and influence zone are likewise confined within the upper layer, indicating the complete effectiveness of the cement-treated upper layer. Brittle behavior of the cemented sand, including fracture or cracking, is not considered in this analysis.
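
The thickness criterion reported in this abstract can be condensed into a small screening helper. The sketch below simply restates the reported thresholds (Hi/B of at least 0.6 untreated, 0.3 cement-treated); it is not a substitute for the PLAXIS 2D analysis:

```python
# Helper encoding the influence-zone thresholds reported above: the upper sand
# layer fully contains the footing's influence zone when Hi/B >= 0.6
# (untreated) or Hi/B >= 0.3 (cement-treated).
def upper_layer_effective(Hi: float, B: float, cement_treated: bool) -> bool:
    threshold = 0.3 if cement_treated else 0.6
    return Hi / B >= threshold

B = 2.5  # strip footing width used in the study, m
for Hi in (0.5, 0.75, 1.5, 2.0):
    print(f"Hi={Hi} m, Hi/B={Hi/B:.2f}: "
          f"untreated={upper_layer_effective(Hi, B, False)}, "
          f"treated={upper_layer_effective(Hi, B, True)}")
```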

Keywords: displacement, ground improvement, influence depth, PLAXIS 2D, primary and secondary settlement, sand mat, soft clay

Procedia PDF Downloads 78
9531 Challenges to Quality Primary Health Care in Saudi Arabia and Potential Improvements Implemented by Other Systems

Authors: Hilal Al Shamsi, Abdullah Almutairi

Abstract:

Introduction: As primary healthcare centres play an important role in implementing Saudi Arabia’s health strategy, this paper offers a review of publications on the quality of the country’s primary health care. With the aim of identifying solutions for improvement, it provides an overview of healthcare quality in this context and indicates barriers to quality. Method: Using two databases, ProQuest and Scopus, data extracted from published articles were systematically analysed to determine the quality of care in Saudi primary health centres and the obstacles to achieving higher quality. Results: Twenty-six articles met the criteria for inclusion in this review. The components of healthcare quality were examined in terms of the access to and effectiveness of interpersonal and clinical care. Good access and effective care were identified in areas such as maternal health care and the control of epidemic diseases, whereas poor access and effectiveness of care were shown for chronic disease management programmes, referral patterns (in terms of referral letters and feedback reports), health education and interpersonal care (in terms of language barriers). Several factors were identified as barriers to high-quality care. These included problems with evidence-based practice implementation, professional development, the use of referrals to secondary care and organisational culture. Successful improvements have been implemented by other systems, such as mobile medical units, electronic referrals, online translation tools and mobile devices and their applications; these could be adopted in Saudi Arabia to improve the quality of its primary healthcare system. Conclusion: The quality of primary health care in Saudi Arabia varies among the different services. To improve quality, management programmes and organisational culture must be promoted in primary health care. Professional development strategies are also needed to improve the skills and knowledge of healthcare professionals.

Keywords: quality, primary health care, Saudi Arabia, health centres, general medical

Procedia PDF Downloads 177
9530 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made networks ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases (KDD) process model, which starts with the selection of datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. The data were then pre-processed: the major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating labelled and unlabelled datasets, dimensionality reduction, size reduction, and data transformation tasks such as discretization. A total of 21,533 intrusion records were used for training the models, and a separate set of 3,397 records was used for testing to validate the performance of the selected model. To build a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and default parameter values showed the best classification accuracy, achieving a prediction accuracy of 96.11% on the training datasets and 93.2% on the test dataset for classifying new instances into the normal, DOS, U2R, R2L and probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested towards an applicable system in this area.
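
The classification setup described can be reproduced in outline with scikit-learn. Note that J48 is the Weka implementation of C4.5; scikit-learn's DecisionTreeClassifier (CART) is used below as an approximate stand-in, and random data replaces the Lincoln Laboratory records, so the printed accuracies are meaningless placeholders:

```python
# Sketch of the classification setup: two classifiers, 10-fold cross-validation,
# five target classes. Random data stands in for the real intrusion records.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
CLASSES = ["normal", "DOS", "U2R", "R2L", "probe"]
X = rng.random((21533, 20))                      # 21,533 training records
y = rng.choice(CLASSES, size=len(X))             # stand-in labels

for name, clf in [("J48-like decision tree", DecisionTreeClassifier()),
                  ("Naive Bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```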

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 283
9529 Quantum Teleportation Using W-BELL and Bell-GHZ Channels

Authors: Abhinav Pandey

Abstract:

Quantum teleportation is the transfer of quantum information between two particles without their being in physical contact with each other. It is a well-established primitive in quantum computation and has been widely used in theoretical physics. Using an entangled pair, teleportation can be achieved with certainty across all possible measurement outcomes. We introduce a 5-qubit general entanglement system using W-Bell and Bell-GHZ channel pairs and show its usefulness in teleportation. In this paper, we use these channels to achieve probabilistic teleportation through channels conventionally regarded as non-teleporting, which has never been achieved before. We compare the W-Bell and Bell-GHZ channels and determine which is better in terms of the probabilistic results of teleporting single qubits.
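
As background for the channel comparison above, the following self-contained NumPy sketch simulates the standard deterministic protocol through an ordinary Bell channel; the paper's 5-qubit W-Bell and Bell-GHZ constructions are not reproduced here. Qubit 0 carries the unknown state and qubits 1 and 2 share the Bell pair:

```python
# Standard single-qubit teleportation through a Bell channel, simulated with
# plain state vectors. Qubits: q0 = unknown state, q1 (Alice) & q2 (Bob) = pair.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def op(gate, pos):
    """Lift a single-qubit gate to the 3-qubit space (qubit order q0 q1 q2)."""
    out = np.array([[1.0]])
    for i in range(3):
        out = np.kron(out, gate if i == pos else I2)
    return out

# CNOT with control q0 and target q1, built as a permutation matrix.
CNOT01 = np.zeros((8, 8))
for b in range(8):
    q0, q1, q2 = (b >> 2) & 1, (b >> 1) & 1, b & 1
    CNOT01[(q0 << 2) | ((q1 ^ q0) << 1) | q2, b] = 1

psi = np.array([0.6, 0.8j])                    # unknown state to teleport
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # Bell pair shared on q1, q2
state = np.kron(psi, bell)

state = op(H, 0) @ (CNOT01 @ state)            # Alice: CNOT then Hadamard

rng = np.random.default_rng(7)
outcome = rng.choice(8, p=np.abs(state) ** 2)  # measure; keep bits m0, m1
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
keep = [((b >> 2) & 1) == m0 and ((b >> 1) & 1) == m1 for b in range(8)]
state = np.where(keep, state, 0)               # collapse Alice's two qubits
state = state / np.linalg.norm(state)

if m1:                                         # Bob's classical corrections
    state = op(X, 2) @ state
if m0:
    state = op(Z, 2) @ state

bob = state.reshape(4, 2)[(m0 << 1) | m1]      # amplitudes of Bob's qubit
print(np.allclose(bob, psi))                   # True: exact teleportation
```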

Keywords: entanglement, teleportation, no cloning theorem, quantum mechanics, probability

Procedia PDF Downloads 30
9528 Fire Resilient Cities: The Impact of Fire Regulations, Technological and Community Resilience

Authors: Fanny Guay

Abstract:

Building resilience, sustainable buildings, urbanization, climate change and resilient cities are just a few examples of where the focus of research has been in the last few years. It is obvious that there is a need to rethink how we are building our cities and how we are renovating our existing buildings. However, the question remaining is: how can we ensure that we are building sustainable yet resilient cities? There are many aspects one can touch upon when discussing resilience in cities, but after the Grenfell Tower fire in June 2017, it has become clear that fire resilience must be a priority. We define resilience as a holistic approach including communities, society and systems, focusing not only on resisting the effects of a disaster but also on how to cope with and recover from it. Cities are an example of such a system, where components such as buildings have an important role to play. A building on fire will have an impact on the community, the economy, the environment, and so the entire system. Therefore, we believe that fire and resilience go hand in hand when we discuss building resilient cities. This article aims to discuss the current state of the concept of fire resilience and suggests actions to support the construction of more fire-resilient buildings. Using the case of Grenfell and the fire safety regulations in the UK, we briefly compare the fire regulations in other European countries, more precisely France, Germany and Denmark, to underline the differences and make some suggestions to increase fire resilience via regulation. For this research, we also include other types of resilience, such as technological resilience, discussing the structure of buildings itself, as well as community resilience, considering the role of communities in building resilience. Our findings demonstrate that to increase fire resilience, amending existing regulations might be necessary, for example, in how reaction-to-fire tests are performed and how building products are classified. However, as we are looking at national regulations, we are only able to make general suggestions for improvement. Another finding of this research is that the capacity of a community to recover and adapt after a fire is also an essential factor. Fundamentally, fire resilience, technological resilience and community resilience are closely connected. Building resilient cities is not only about sustainable buildings or energy efficiency; it is about ensuring that all aspects of resilience are included when building or renovating buildings. We must ask ourselves questions such as: Who are the users of this building? Where is the building located? What are the components of the building, how was it designed, and which construction products have been used? If we want to have resilient cities, we must answer these basic questions and ensure that basic factors such as fire resilience are included in our assessment.

Keywords: buildings, cities, fire, resilience

Procedia PDF Downloads 149
9527 Identifying, Reporting and Preventing Medical Errors Among Nurses Working in Critical Care Units At Kenyatta National Hospital, Kenya: Closing the Gap Between Attitude and Practice

Authors: Jared Abuga, Wesley Too

Abstract:

Medical error is the third leading cause of death in the US, with approximately 98,000 deaths occurring every year as a result of medical errors. The global financial burden of medication errors is roughly USD 42 billion. Medication errors may lead to at least one death daily and injure roughly 1.3 million people every year. Medical error reporting is essential in creating a culture of accountability in our healthcare system. Studies of the attitudes and practices of healthcare workers in reporting medical errors have shown that the major factors in under-reporting include work stress and fear of medico-legal consequences due to the disclosure of the error. Further, the majority believed that an increase in reporting medical errors would contribute to a better system. Most hospitals depend on nurses to discover medication errors because they are considered to be the sources of these errors, whether as contributors or mere observers. Consequently, nurses' perception of medication errors and of what needs to be done is a vital element in reducing the incidence of medication errors. We sought to explore nurses' knowledge of medical errors and the factors affecting or hindering the reporting of medical errors among nurses working in the emergency unit, KNH. Critical care nurses face many barriers to completing incident reports on medication errors. One of these barriers, which contributes to under-reporting, is a lack of education and/or knowledge regarding medication errors and the reporting process. This study, therefore, sought to determine the availability and use of reporting systems for medical errors in critical care units. It also sought to establish nurses’ perceptions regarding medical errors and reporting, and to document factors facilitating timely identification and reporting of medical errors in critical care settings. Methods: The study used a cross-sectional design to collect data from 76 critical care nurses at Kenyatta Teaching & Research National Referral Hospital, Kenya. Data analysis is ongoing; by October 2022, we expect to have the analysis, results, discussion, and recommendations of the study ready for the conference in 2023.

Keywords: errors, medical, Kenya, nurses, safety

Procedia PDF Downloads 226
9526 The Cost-Effectiveness of Pancreatic Surgical Cancer Care in the US vs. the European Union: Results of a Review of the Peer-Reviewed Scientific Literature

Authors: Shannon Hearney, Jeffrey Hoch

Abstract:

While all cancers are costly to treat, pancreatic cancer is a notoriously costly and deadly form of cancer. Across the world, there is a variety of treatment centers, ranging from small clinics to large, high-volume hospitals, as well as differing structures of payment and access. It has been noted that centers that treat a high volume of pancreatic cancer patients have higher quality of care; it is unclear whether that care is cost-effective. In the US, there is no clear consensus on the cost-effectiveness of high-volume centers for the surgical care of pancreatic cancer. European countries like Finland and Italy have shown that high-volume centers have lower mortality rates and can have lower costs; there is, however, still a gap in knowledge about these centers' cost-effectiveness globally. This paper seeks to review the current literature in Europe and the US to gain a better understanding of the state of high-volume pancreatic surgical centers' cost-effectiveness while considering the contextual differences in health system structure. A review of major reference databases such as Medline, Embase and PubMed will be conducted for cost-effectiveness studies on the surgical treatment of pancreatic cancer at high-volume centers. Possible MeSH terms to be included, but not limited to, are: “pancreatic cancer”, “cost analysis”, “cost-effectiveness”, “economic evaluation”, “pancreatic neoplasms”, “surgical”, “Europe”, “socialized medicine”, “privatized medicine”, “for-profit”, and “high-volume”. Studies must also be available in the English language. This review will encompass European scientific literature as well as that of the US. Based on our preliminary findings, we anticipate high-volume hospitals to provide better care at greater costs. We anticipate that high-volume hospitals may be cost-effective in different contexts depending on the national structure of a healthcare system. Countries with more centralized and socialized healthcare may yield results that are more cost-effective. High-volume centers may differ in their cost-effectiveness of the surgical care of pancreatic cancer internationally, especially when comparing those in the United States to others throughout Europe.
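
A search strategy of this kind can be scripted for reproducibility. The sketch below shows one way to run such a query against PubMed using Biopython's Entrez module; the query string merely combines the example terms listed above and is an illustrative assumption, not the review's validated strategy:

```python
# Sketch of scripting the described search against PubMed with Biopython.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

query = ('("pancreatic neoplasms"[MeSH Terms] OR "pancreatic cancer") '
         'AND ("cost-effectiveness" OR "economic evaluation" OR "cost analysis") '
         'AND ("high-volume" OR surgical) AND english[Language]')

handle = Entrez.esearch(db="pubmed", term=query, retmax=50)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "matching records; first IDs:", record["IdList"][:5])
```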

Keywords: cost-effectiveness analysis, economic evaluation, pancreatic cancer, scientific literature review

Procedia PDF Downloads 77
9525 The Relationship between Creative Imagination and Curriculum

Authors: Faride Hashemiannejad, Shima Oloomi

Abstract:

Imagination is one of the important elements of creative thinking, a skill which needs attention from the educational system. Although most students learn reading, writing, and arithmetic skills well, they lack higher-level thinking skills like creative thinking. Therefore, in the information age and at the beginning of the transition to a knowledge-based society, the educational system needs to reconsider its goals and mission and concentrate on a creativity-based curriculum. Among the curriculum elements – goals, content, method and evaluation – “method” is a major domain whose reform can pave the way for fostering imagination and creativity. The purpose of this study was to examine the relationship between creativity development and the curriculum. Research questions were: (1) Is there a relationship between the cognitive-emotional structure of the classroom and creativity development? (2) Is there a relationship between the environmental-social structure of the classroom and creativity development? (3) Is there a relationship between the thinking structure of the classroom and creativity development? (4) Is there a relationship between the physical structure of the classroom and creativity development? (5) Is there a relationship between the instructional structure of the classroom and creativity development? Method: This is applied research, and the research method is correlational. Participants: The participants in this study were 894 high-school students up to 11th grade from seven schools in seven zones of Mashad city. Sampling plan: Sampling was based on multi-stage random selection. Measurement: The measures in this study were (a) the Test of Creative Thinking and (b) a researcher-made questionnaire comprising five subscales: cognitive-emotional structure, environmental-social structure, thinking structure, physical structure, and instructional structure. The results show: There was a significant relationship between the cognitive-emotional structure of the classroom and students’ creativity development (sig=0.139). There was a significant relationship between the environmental-social structure of the classroom and students’ creativity development (sig=0.006). There was a significant relationship between the thinking structure of the classroom and students’ creativity development (sig=0.004). There was no significant relationship between the physical structure of the classroom and students’ creativity development (sig=0.215). There was a significant relationship between the instructional structure of the classroom and students’ creativity development (sig=0.003). These findings suggest that if students feel secure, calm and confident, they can experience creative learning. The quality of responses to students’ questions, imagination and risk-taking can also influence their creativity development.
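
For readers unfamiliar with this kind of correlational analysis, the sketch below reproduces its shape in Python: Pearson correlations between each questionnaire subscale and a creative-thinking score. The data are randomly generated stand-ins, not the study's 894 responses, so the printed coefficients will not match the values reported above:

```python
# Sketch of the correlational analysis: subscale vs. creativity score.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 894
creativity = rng.normal(size=n)
subscales = {
    "cognitive-emotional": 0.5 * creativity + rng.normal(size=n),
    "environmental-social": 0.4 * creativity + rng.normal(size=n),
    "thinking": 0.4 * creativity + rng.normal(size=n),
    "physical": rng.normal(size=n),          # built to be uncorrelated
    "instructional": 0.4 * creativity + rng.normal(size=n),
}
for name, scores in subscales.items():
    r, p = pearsonr(scores, creativity)
    print(f"{name:22s} r={r:+.2f} sig={p:.3f}")
```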

Keywords: imagination, creativity, curriculum, bioinformatics, biomedicine

Procedia PDF Downloads 466
9524 Pursuing Knowledge Society Excellence: Knowledge Management and Open Innovation Platforms for Research, Industry and Business Collaboration in Singapore

Authors: Irina-Emily Hansen, Ola Jon Mork

Abstract:

The European economic growth strategy and its supporting framework for research and innovation highlight the importance of nurturing new open innovation in order to strengthen Europe’s competitiveness. One of the main approaches to enhancing innovation in European society is the Triple Helix model, which centres on science-industry collaboration and assigns the universities the managerial role. In spite of the defined collaboration strategy, collaboration between academia and industry in Europe still faces many challenges. Many of them are explained by cultural difference: academic culture aims towards scientific knowledge, while businesses are oriented towards production and profitable results; the execution of collaborative projects is also seen differently by the partners involved. This indicates that traditional management strategies applied to collaboration between researchers and businesses are not effective. There is a need for dynamic strategies that can support the interaction between researchers and industry, intensifying knowledge co-creation and contributing to the development of the national innovation system (NIS) by incorporating individual, organizational and inter-organizational learning. In order to find a good model to follow, the authors of this paper have investigated Singapore, one of the most rapidly developing knowledge-based innovation societies. Singapore does not possess many land or sea resources of the kind that normally provide income for a country. Therefore, Singapore was forced to think differently and build its society on the resources that are available: talented people and knowledge. Over the last twenty years, Singapore has developed attractive, highly rated university campuses, research institutions and leading industrial companies from all over the world. This article elucidates and elaborates Singapore’s national innovation strategies from a knowledge management perspective. The research covers the variety of organizations that enable and support knowledge development in this state: governmental research and development (R&D) centers in universities, private talent incubators for entrepreneurs, and industrial companies with their own R&D departments. The research methods are based on presentations, documents, and visits to a number of universities, research institutes, innovation parks, governmental institutions, industrial companies and innovation exhibitions in Singapore. In addition, a literature review of scientific articles on the topic was conducted. The first finding is that the objectives of collaboration between researchers, entrepreneurs and industry in Singapore correspond to the primary goals of the state: knowledge and economic growth. There are common objectives for all stakeholders at all national levels. The second finding is that Singapore has established a system at the national level that supports innovation the entire way, from fostering or capturing new knowledge, through knowledge exchange and co-creation, to its application in real life. The conclusion is that innovation means not only a new idea but also the enabling mechanism for its execution and a market-oriented approach so that new knowledge can be absorbed by society. Future research could address the application of Singapore’s knowledge management strategy for innovation to European countries.

Keywords: knowledge management strategy, national innovation system, research industry and business collaboration, knowledge enabling

Procedia PDF Downloads 168
9523 Unsupervised Part-of-Speech Tagging for Amharic Using K-Means Clustering

Authors: Zelalem Fantahun

Abstract:

Part-of-speech tagging is the process of assigning a part-of-speech or other lexical class marker to each word in naturally occurring text. It is one of the most fundamental tasks in almost all natural language processing. In natural language processing, the need for large amounts of manually annotated data is a knowledge acquisition bottleneck. Since Amharic is an under-resourced language, the lack of a tagged corpus is the bottleneck for natural language processing, especially for POS tagging. A promising direction to tackle this problem is to build a system that does not require manually tagged data. In unsupervised learning, the learner is not provided with classifications; unsupervised algorithms seek out similarity between pieces of data in order to determine whether they can be characterized as forming a group. This paper describes the development of an unsupervised part-of-speech tagger for Amharic using K-means clustering, exploiting the large amounts of raw text produced in day-to-day activities. In the development of the tagger, the following procedures were followed. First, the unlabeled data (raw text) is divided into 10 folds, and the tokenization phase takes place; at this level, the raw text is chunked at the sentence level and then into words. The second phase is feature extraction, which includes word frequency and the syntactic and morphological features of a word. The third phase is clustering: among the different clustering algorithms, K-means was selected and implemented in this study, bringing groups of similar words together. The fourth phase is mapping, in which each cluster is examined carefully and the most common tag is assigned to the group. This study identifies two features that are capable of distinguishing one part-of-speech from others, namely morphological features and positional information, and shows that it is possible to use unsupervised learning for Amharic POS tagging. In order to increase the performance of the unsupervised part-of-speech tagger, there is a need to incorporate other features not included in this study, such as semantic information. Finally, based on the experimental results, the system achieves a maximum accuracy of 81%.
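
The four-phase pipeline described (tokenisation, feature extraction, K-means clustering, and cluster-to-tag mapping) can be sketched with scikit-learn. The toy English-like tokens and the crude suffix feature below are illustrative stand-ins for the Amharic data and the morphological features used in the study:

```python
# Sketch of the unsupervised tagging pipeline: per-word frequency, positional,
# and suffix (morphological) features, then K-means clustering.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

sentences = [["the", "dog", "runs", "quickly"],
             ["a", "cat", "sleeps", "quietly"],
             ["the", "bird", "sings", "loudly"]]

vocab = sorted({w for s in sentences for w in s})
freq = Counter(w for s in sentences for w in s)
positions = {w: [] for w in vocab}
for s in sentences:
    for i, w in enumerate(s):
        positions[w].append(i / (len(s) - 1))   # normalized sentence position

def features(w):
    suffix = (ord(w[-1]) % 7) / 7.0             # crude stand-in for a suffix feature
    return [freq[w], np.mean(positions[w]), suffix]

X = np.array([features(w) for w in vocab])
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

clusters = {}
for w, c in zip(vocab, kmeans.labels_):
    clusters.setdefault(c, []).append(w)
for c, words in sorted(clusters.items()):
    # Mapping phase: a human (or a small tagged sample) assigns the majority
    # POS tag to each cluster, e.g. NOUN or ADV.
    print(f"cluster {c}: {words}")
```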

Keywords: POS tagging, Amharic, unsupervised learning, k-means

Procedia PDF Downloads 426
9522 Contact Zones and Fashion Hubs: From Circular Economy to Circular Neighbourhoods

Authors: Tiziana Ferrero-Regis, Marissa Lindquist

Abstract:

Circular Economy (CE) is increasingly seen as the reorganisation of production and consumption, and cities are acknowledged as the sources of many ecological and social problems; at the same time, they can be re-imagined through an ecologically and socially resilient future. The concept of the CE has received pointed critiques for its techno-deterministic orientation and its focus on science and transformation by policy. At the heart of our local re-imagining of the CE into circularity through contact zones is the acknowledgment of collective, spontaneous and shared imaginations of alternative and sustainable futures through the creation of networks of community initiatives that are transformative, creating opportunities that simultaneously make cities rich and enrich humans. This paper presents a mapping project of the fashion and textile ecosystem in Brisbane, Queensland, Australia. Brisbane is currently the most aspirational city in Australia, as its population growth rate is the highest in the country. Yet Brisbane is considered the least of a “fashion city” in the country. In contrast, the project revealed a greatly enhanced picture of distinct fashion and textile clusters across greater Brisbane, and the adjacency of key services that may act to consolidate CE community contact zones. Clusters to the north of Brisbane and several locales to the south are zones with a greater mix of public/social amenities, walkable areas and local transport networks alongside educational precincts, community hubs, and concentrations of small enterprises, designers, artisans and waste recovery centers; mapping them will help establish knowledge of the key infrastructure networks that can enmesh these zones together. The paper presents two case studies of independent designers who work on new and re-designed clothing by recovering pre-consumer textiles and who operate from within creative precincts. The first case is designer Nelson Molloy, who recently returned to the inner-city suburb of West End with their Chasing Zero Design project. The area was known in the 1980s and 1990s for its alternative lifestyle, with creative independent production, thrifty clothing shops, alternative fashion and a socialist agenda. After 30 years of progressive gentrification of the suburb, which has dislocated many of the artists, designers and artisans, West End is seeing the return and amplification of clusters of artisans, artists, designers and architects. The other case study is Practice Studio, located in a new zone of creative growth, Bowen Hills, north of the CBD. Practice Studio combines retail with a workroom, offers repair and remaking services, and has become a point of reference for young and emerging Australian designers and artists. The paper demonstrates the spatial politics of the CE and the way in which new cultural capital is produced thanks to cultural specificities and resources. It argues for the recognition of contact zones that are created by local actors, communities and knowledge networks, whose grass-roots agency is fundamental to the co-production of the CE’s systems of local governance.

Keywords: contact zones, circular cities, fashion and textiles, circular neighbourhoods, Australia

Procedia PDF Downloads 80
9521 Formulating a Definition of Hate Speech: From Divergence to Convergence

Authors: Avitus A. Agbor

Abstract:

Numerous incidents, ranging from trivial to catastrophic, come to mind when one reflects on hate. The victims of these belong to specific identifiable groups within communities. These experiences evoke discussions on Islamophobia, xenophobia, homophobia, anti-Semitism, racism, ethnic hatred, atheism, and other brutal forms of bigotry. Common to all these is an invisible but potent force that drives all of them: hatred. Such hatred is usually fueled by a profound degree of intolerance (of diversity) and the zeal to impose on others one's own beliefs and practices, considered to be the conventional norm. More importantly, the perpetuation of these hateful acts is the unfortunate outcome of an overplay of invective and hate speech which, to a great extent, cannot be divorced from hate. From a legal perspective, acknowledging the existence of an undeniable link between hate speech and hate is quite easy. However, both within and outside legal scholarship, the notion of “hate speech” remains a conundrum: a phrase more easily explained through experiences than pinned down in a watertight definition that captures its entire essence and nature. The problem is further compounded by a few factors. First, within the international human rights framework, the notion of hate speech is not used: in limiting the right to freedom of expression, the ICCPR simply excludes specific kinds of speech (but does not refer to them as hate speech). Regional human rights instruments are not so different, except for the subsequent developments in the European Union, where the notion has been carefully delineated and a much clearer picture of what constitutes hate speech is now provided. The legal architecture of domestic legal systems clearly shows differences in approach and regulation, making matters more difficult: what may be hate speech in one legal system may very well be acceptable legal speech in another. Lastly, the cornucopia of academic voices on the issue of hate speech exudes this divergence. Yet, in the absence of a well-formulated and universally acceptable definition, it is important to consider how hate speech can be defined. Taking an evidence-based approach, this research looks into the issue of defining hate speech in legal scholarship, and why and how such a formulation is of critical importance in the prohibition and prosecution of hate speech.

Keywords: hate speech, international human rights law, international criminal law, freedom of expression

Procedia PDF Downloads 53
9520 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank

Authors: Jalal Haghighat Monfared, Zahra Akbari

Abstract:

Today, business executives need useful information to make better decisions. Banks have also been using information tools to direct the decision-making process towards their desired goals by rapidly extracting information from sources with the help of business intelligence. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's General Departments (190 people) who use business intelligence reports. The sample size of 123 was determined randomly by statistical methods. Relevant statistical inference has been used for data analysis and hypothesis testing. In the first stage, using the Kolmogorov-Smirnov test, the normality of the data was examined, and in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, using structural equation modeling and Pearson's correlation coefficient, the research hypotheses were tested. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, integration with other systems, user access, flexibility and risk management support, the flexibility of the business intelligence system was the most correlated with the dependent variable of the present research. This shows that it is necessary for Mellat Bank to pay more attention to choosing business intelligence systems with high flexibility, in particular the ability to produce custom-formatted reports. After flexibility, the quality of data in business intelligence systems showed the strongest relationship with the quality of decision making. Therefore, improving the quality of data, including the source of the data (internal or external), the type of data (quantitative or qualitative), the credibility of the data, and the perceptions of those who use the business intelligence system, improves the quality of decision making in Mellat Bank.
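
The first analysis steps described can be illustrated in a few lines with SciPy: a Kolmogorov-Smirnov check of normality followed by a Pearson correlation between the two constructs. Random stand-in scores replace the 123 questionnaire responses, and fitting the normal parameters from the sample is a simplification of the study's procedure:

```python
# Sketch of the normality check and correlation step on stand-in survey scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
bi_capability = rng.normal(loc=3.5, scale=0.6, size=123)     # 1-5 Likert means
decision_quality = 0.6 * bi_capability + rng.normal(scale=0.5, size=123)

for name, x in [("BI capability", bi_capability),
                ("decision quality", decision_quality)]:
    stat, p = stats.kstest((x - x.mean()) / x.std(ddof=1), "norm")
    print(f"{name}: KS p={p:.3f} ({'normal' if p > 0.05 else 'non-normal'})")

r, p = stats.pearsonr(bi_capability, decision_quality)
print(f"Pearson r={r:.2f}, p={p:.4f}")
```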

Keywords: business intelligence, business intelligence capability, decision making, decision quality

Procedia PDF Downloads 101
9519 Risk-Sharing Financing of Islamic Banks: Better Shielded against Interest Rate Risk

Authors: Mirzet SeHo, Alaa Alaabed, Mansur Masih

Abstract:

In theory, risk-sharing-based financing (RSF) is considered a cornerstone of Islamic finance. It is argued to render Islamic banks more resilient to shocks. In practice, however, this feature of Islamic financial products is almost negligible. Instead, debt-based instruments with conventional-like features have overwhelmed the nascent industry. In addition, the framework of present-day economic, regulatory and financial reality inevitably exposes Islamic banks in dual banking systems to the problems of conventional banks. This includes, but is not limited to, interest rate risk. Empirical evidence has, thus far, confirmed such exposures, despite Islamic banks’ interest-free operations. This study applies system GMM in modeling the determinants of RSF and finds that RSF is insensitive to changes in interest rates. Hence, our results provide support for the “stability” view of risk-sharing-based financing. This suggests RSF as the way forward for risk management at Islamic banks in the absence of widely acceptable Shariah-compliant hedging instruments. Further support for the stability view is given by evidence of counter-cyclicality. Unlike debt-based lending, which inflates artificial asset bubbles through credit expansion during the upswing of business cycles, RSF is negatively related to GDP growth. Our results also imply a significantly strong relationship between risk-sharing deposits and RSF. However, the pass-through of these deposits to RSF is economically low: only about 40% of risk-sharing deposits are channeled to risk-sharing financing. This raises questions about the validity of the industry’s claim that depositors accustomed to conventional banking shun risk sharing, and signals potential for better balance sheet management at Islamic banks. Overall, our findings suggest that, on the one hand, Islamic banks can gain ‘independence’ from conventional banks and interest rates through risk-sharing products, the potential for which is enormous; on the other hand, RSF could enable policy makers to improve systemic stability and restrain excessive credit expansion through its countercyclical features.

Keywords: Islamic banks, risk-sharing, financing, interest rate, dynamic system GMM

Procedia PDF Downloads 306
9518 Informing, Enabling and Inspiring Social Innovation by Geographic Systems Mapping: A Case Study in Workforce Development

Authors: Cassandra A. Skinner, Linda R. Chamberlain

Abstract:

The nonprofit and public sectors are increasingly turning to Geographic Information Systems for data visualizations which can better inform programmatic and policy decisions. Additionally, the private and nonprofit sectors are turning to systems mapping to better understand the ecosystems within which they operate. This study explores the potential that combining these data visualization methods, an approach called geographic systems mapping, may hold for social innovation efforts by creating an exhaustive and comprehensive understanding of a social problem's ecosystem. Researchers with Grand Valley State University collaborated with Talent 2025 of West Michigan to conduct a mixed-methods research study to paint a comprehensive picture of the workforce development ecosystem in West Michigan. Using semi-structured interviewing, observation, secondary research, and quantitative analysis, data were compiled on workforce development organizations’ locations, programming, metrics for success, partnerships, funding sources, and service language. To best visualize and disseminate the data, a geographic system map was created which identifies programmatic, operational, and geographic gaps in the workforce development services of West Michigan. By combining geographic and systems mapping methods, the geographic system map provides insight into the cross-sector relationships, collaboration, and competition which exist among and between workforce development organizations. These insights identify opportunities for, and constraints around, cross-sectoral social innovation in the West Michigan workforce development ecosystem. This paper will discuss the process utilized to prepare the geographic systems map, explain the results and outcomes, and demonstrate how geographic systems mapping illuminated the needs of the community and opportunities for social innovation. As complicated social problems like unemployment often require cross-sectoral and multi-stakeholder solutions, there is potential for geographic systems mapping to be a tool which informs, enables, and inspires these solutions.
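
A minimal version of such a geographic systems map can be assembled from a GeoDataFrame of organisation locations and a graph of partnership links drawn over it. The coordinates, names, and links below are invented placeholders, not the Talent 2025 study data:

```python
# Sketch of a geographic systems map: mapped organizations plus system links.
import geopandas as gpd
import networkx as nx
import matplotlib.pyplot as plt
from shapely.geometry import Point

orgs = gpd.GeoDataFrame(
    {"name": ["Org A", "Org B", "Org C"],
     "sector": ["nonprofit", "public", "private"]},
    geometry=[Point(-85.67, 42.96), Point(-85.60, 43.00), Point(-85.75, 42.90)],
    crs="EPSG:4326",
)
G = nx.Graph()
G.add_edges_from([("Org A", "Org B"), ("Org B", "Org C")])  # partnerships

ax = orgs.plot(column="sector", markersize=80, legend=True)
coords = {row["name"]: (row.geometry.x, row.geometry.y)
          for _, row in orgs.iterrows()}
for a, b in G.edges:                          # draw system links over the map
    (x1, y1), (x2, y2) = coords[a], coords[b]
    ax.plot([x1, x2], [y1, y2], "k--", linewidth=1)
for name, (x, y) in coords.items():
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(4, 4))
plt.savefig("geographic_systems_map.png")
```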

Keywords: cross-sector collaboration, data visualization, geographic systems mapping, social innovation, workforce development

Procedia PDF Downloads 279
9517 Introduction of an Approach of Complex Virtual Devices to Achieve Device Interoperability in Smart Building Systems

Authors: Thomas Meier

Abstract:

One of the major challenges for sustainable smart building systems is to support device interoperability, i.e. connecting sensor or actuator devices from different vendors and presenting their functionality to external applications. Furthermore, smart building systems are expected to connect with devices that are not available yet, i.e. devices that only reach the market sometime later. It is of vital importance that a sustainable smart building platform provides an appropriate external interface that can be leveraged by external applications and smart services. An external platform interface must be stable and independent of specific devices and should support flexible and scalable usage scenarios. A typical approach applied in smart home systems is based on a generic device interface used within the smart building platform. Device functions, even of rather complex devices, are mapped to that generic base-type interface by means of specific device drivers. Our new approach, presented in this work, extends this by using the smart building system’s rule engine to create complex virtual devices that can represent the most diverse properties of real devices. We examined and evaluated both approaches in a practical case study using a smart building system that we have developed. We show that our solution allows the highest degree of flexibility without affecting the stability and scalability of the external application interface. In contrast to other systems, our approach supports complex virtual device configuration at the application layer (e.g. by administrative users) instead of device configuration at the platform layer (e.g. by platform operators). Based on our work, we show that our approach supports almost arbitrarily flexible use case scenarios without affecting the stability of the external application interface. However, the cost of this approach is additional configuration overhead and additional resource consumption at the IoT platform level that must be considered by platform operators. We conclude that the concept of complex virtual devices presented in this work can significantly improve the usability and device interoperability of sustainable intelligent building systems.
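
A minimal sketch of the concept in Python follows: several real devices behind a generic base-type interface are composed, via a rule, into a complex virtual device that still presents the same generic interface to external applications. The class names, rule, and readings are illustrative, not the system’s actual API.

```python
class Device:
    """Generic base-type interface that the platform exposes externally."""
    def read(self) -> dict:
        raise NotImplementedError

class TemperatureSensor(Device):
    def read(self):
        return {"temperature": 21.5}  # stub standing in for a vendor driver

class HumiditySensor(Device):
    def read(self):
        return {"humidity": 40.0}

class ComplexVirtualDevice(Device):
    """A virtual device composed at application layer from a rule over
    several real devices, while still presenting the generic interface."""
    def __init__(self, devices, rule):
        self.devices = devices
        self.rule = rule
    def read(self):
        merged = {}
        for d in self.devices:
            merged.update(d.read())
        merged.update(self.rule(merged))  # rule engine derives new properties
        return merged

# Rule: derive a comfort flag from temperature and humidity readings
comfort = ComplexVirtualDevice(
    [TemperatureSensor(), HumiditySensor()],
    lambda v: {"comfortable": 19 <= v["temperature"] <= 24 and v["humidity"] < 60},
)
print(comfort.read())
```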

Keywords: Internet of Things, smart building, device interoperability, device integration, smart home

Procedia PDF Downloads 252
9516 Kalman Filter for Bilinear Systems with Application

Authors: Abdullah E. Al-Mazrooei

Abstract:

In this paper, we present a new kind of bilinear system in state space model form. The evolution of this system depends on the product of the state vector with itself. The well-known Lotka-Volterra and Lorenz models are special cases of this new model. We also present a generalization of the Kalman filter that is suited to the new bilinear model. An application to real measurements is introduced to illustrate the efficiency of the proposed algorithm.
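
The paper’s generalized filter itself is not reproduced here; as an illustration of filtering a state whose evolution involves products of its own components, the sketch below applies a standard extended Kalman filter to a discrete Lotka-Volterra-like bilinear model. All parameters, the measurement model, and the noise levels are assumptions for the example.

```python
import numpy as np

# Discrete-time bilinear (Lotka-Volterra-like) dynamics: the state update
# depends on products of state components (x1*x2 terms).
a, b, c, d, dt = 1.0, 0.5, 0.8, 0.4, 0.01

def f(x):
    x1, x2 = x
    return np.array([x1 + dt * (a * x1 - b * x1 * x2),
                     x2 + dt * (-c * x2 + d * x1 * x2)])

def F_jac(x):  # Jacobian of f, used to propagate the covariance
    x1, x2 = x
    return np.array([[1 + dt * (a - b * x2), -dt * b * x1],
                     [dt * d * x2, 1 + dt * (-c + d * x1)]])

H = np.array([[1.0, 0.0]])    # observe the first state component only
Q = 1e-5 * np.eye(2)          # assumed process noise covariance
R = np.array([[0.01]])        # assumed measurement noise covariance

def ekf_step(x, P, z):
    x_pred = f(x)                        # predict through bilinear dynamics
    Fk = F_jac(x)
    P_pred = Fk @ P @ Fk.T + Q
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Simulate the true system and filter noisy measurements of it
rng = np.random.default_rng(1)
x_true, x_est, P = np.array([2.0, 1.0]), np.array([1.5, 1.5]), np.eye(2)
for _ in range(1000):
    x_true = f(x_true)
    z = H @ x_true + rng.normal(scale=0.1, size=1)
    x_est, P = ekf_step(x_est, P, z)
print(x_true, x_est)  # estimate should track the true state closely
```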

Keywords: bilinear systems, state space model, Kalman filter, application, models

Procedia PDF Downloads 414
9515 Role of Indigenous Women in Securing Sustainable Livelihoods in Western Himalayan Region, India

Authors: Haresh Sharma, Jaimini Luharia

Abstract:

The ecology of the Western Himalayan region transforms with changes in altitude, observed in the topography, the species of flora and fauna, and the quality of the soil. The current study focuses on women of the indigenous communities of Pangi Valley, located in the state of Himachal Pradesh, India. The valley is divided into three areas: the Saichu, Hudan Bhatori, and Sural Bhatori valleys. It is one of the most remote, rugged and difficult-to-access tribal regions of Chamba district, with altitudes ranging from 2,000 m to 6,000 m above sea level. Pangi Valley is inhabited by the ‘Pangwal’ and ‘Bhot’ tribes of the Himalayas, who speak a local tribal language called ‘Pangwali’. The valley is cut off from the mainland during peak winters due to heavy snow and the lack of proper roads. Because of this difficult geography, the daily lives of the people are constantly challenged, and they are often deprived of the benefits targeted through government programs. The indigenous communities earn their livelihood through livestock and forest-based produce, while some migrate to nearby places for better work. The current study uses snowball sampling for data collection, along with in-depth interviews of women members of Self-Help Groups and women farmers. The findings reveal that the lives of these indigenous communities largely depend on forest-based products, which makes enhancing, maintaining, and consuming natural resources sustainably all the more significant. Under such circumstances, the women of the community play a significant role as guardians in the conservation and protection of the forests. They are the custodians of traditional knowledge of environmental conservation practices that have been followed for many years in the region. The present study also sought to establish a relationship between some of the development initiatives undertaken by the women in the valley that stimulate a sustainable mountain economy and conservation practices. These initiatives include the cultivation of products such as hazelnut, the rare ‘Gucchi’ mushroom, and medicinal plants exclusively found in the region, thereby promoting the long-term sustainable conservation of the agro-biodiversity of the Western Himalayan region. The measures taken by the community women are commendable, as they ensure access to and distribution of natural resources as well as their management for future generations. Apart from this, the tribal women have actively formed Self-Help Groups promoting financial inclusion through various activities that augment ownership of, and accountability toward, the overall development of the communities. However, the results also suggest that women’s role in forest conservation practices receives too little recognition, for several local socio-political reasons. Few research studies have been conducted on the communities of Pangi Valley because of the inaccessibility created by the lack of proper roads and other resources, and there is a clear need to document the indigenous and traditional knowledge of conservation practices followed by women in the community.

Keywords: forest conservation, indigenous community women, sustainable livelihoods, sustainable development, poverty alleviation, Western Himalayas

Procedia PDF Downloads 108
9514 Teaching Linguistic Humour Research Theories: Egyptian Higher Education EFL Literature Classes

Authors: O. F. Elkommos

Abstract:

“Humour studies” is a relatively recent interdisciplinary research area. It interests researchers from the disciplines of psychology, sociology, medicine, nursing, workplace studies, and gender studies, among others, and certainly teaching, language learning, linguistics, and literature. Linguistic theories of humour research are numerous, some of which are of interest to the present study. Although humour courses are now taught in universities around the world, they are not yet included in the Egyptian context. The purpose of the present study is two-fold: to review the state of the art and to show how linguistic theories of humour can be used as an art and craft of teaching and learning in EFL literature classes. In the present study, linguistic theories of humour were applied to selected literary texts to interpret humour as an intrinsic artistic communicative competence challenge. In linguistics, humour has been seen as a fifth component of the communicative competence of the second language learner; in literature, it has been studied as satire, irony, wit, or comedy. Linguistic theories of humour now describe its linguistic structure, mechanism, function, and linguistic deviance. The Semantic Script Theory of Humor (SSTH), the General Theory of Verbal Humor (GTVH), the Audience Based Theory of Humor (ABTH), and their extensions and subcategories, as well as the pragmatic perspective, were employed in the analyses. This research analysed the linguistic semantic structure of humour, its mechanism, and how the audience reader (teacher or learner) becomes an interactive interpreter of the humour. This promotes humour competence together with linguistic, social, cultural, and discourse communicative competence. Studying humour as part of literary texts, and perceiving its function in the work, also brings positive associations into class for educational purposes. Humour is by default a laughter-generating device. Recognizing, perceiving, and resolving incongruity is a cognitive mastery. This cognitive process involves a humour experience that lightens up the classroom and the mind, and it establishes connections necessary for the learning process. In this context, the study examined selected narratives to exemplify the application of the theories. It is, therefore, recommended that the theories be taught and applied to literary texts for a better understanding of the language, so that students develop their language competence. Teachers in EFL/ESL classes will teach the theories, assist students in applying them to interpret texts, and in the process also use humour, thus easing students’ acquisition of the second language and making the classroom an enjoyable, cheerful, self-assuring, and self-illuminating experience for both themselves and their students. It is further recommended that courses in humour research become an integral part of higher education curricula in Egypt.

Keywords: ABTH, deviance, disjuncture, episodic, GTVH, humour competence, humour comprehension, humour in the classroom, humour in the literary texts, humour research linguistic theories, incongruity-resolution, isotopy-disjunction, jab line, longer text joke, narrative story line (macro-micro), punch line, six knowledge resource, SSTH, stacks, strands, teaching linguistics, teaching literature, TEFL, TESL

Procedia PDF Downloads 287
9513 Cotton Transplantation as a Practice to Escape Infection with Some Soil-Borne Pathogens

Authors: E. M. H. Maggie, M. N. A. Nazmey, M. A. Abdel-Sattar, S. A. Saied

Abstract:

A successful trial of transplanting cotton is reported. Seeds are grown in trays for 4-5 weeks in an easily prepared supporting medium such as peat moss or similar plant waste. The seedlings are then carefully transplanted into the permanent field with the root system as intact as possible. The practice reduced the incidence of damping-off and allowed full winter crop revenues. Further work is needed to evaluate parameters such as the growth curve, flowering curve, and yield on an economic basis.

Keywords: cotton, transplanting cotton, damping-off diseases, environment sciences

Procedia PDF Downloads 344
9512 Performance of PAPR Reduction in OFDM Systems for Wireless Communications

Authors: Alcardo Alex Barakabitze, Saddam Aziz, Muhammad Zubair

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM) is a special form of multicarrier transmission that splits the total transmission bandwidth into a number of orthogonal, non-overlapping subcarriers and transmits collections of bits, called symbols, in parallel over these subcarriers. In this paper, we explore the peak-to-average power ratio (PAPR) problem in OFDM systems. We provide a performance analysis of the CCDF of PAPR and the BER through MATLAB simulations.
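
For a rough picture of how such a simulation is set up (the study used MATLAB; this is an equivalent Python sketch with assumed parameters), the following estimates the CCDF of PAPR for QPSK-modulated OFDM symbols by Monte Carlo.

```python
import numpy as np

# Monte-Carlo CCDF of PAPR for a baseband OFDM system. The subcarrier
# count, modulation, and symbol count are assumptions for this sketch.
n_sub, n_sym = 64, 10000
rng = np.random.default_rng(0)

# Random QPSK symbols on every subcarrier
bits = rng.integers(0, 2, size=(n_sym, n_sub, 2))
qpsk = (2 * bits[..., 0] - 1 + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)

# OFDM modulation: IFFT across the subcarrier axis
time_signal = np.fft.ifft(qpsk, axis=1)

# PAPR per OFDM symbol, in dB: peak power over mean power
power = np.abs(time_signal) ** 2
papr_db = 10 * np.log10(power.max(axis=1) / power.mean(axis=1))

# CCDF: probability that PAPR exceeds a given threshold
for thr in [6, 8, 10]:
    print(f"P(PAPR > {thr} dB) = {(papr_db > thr).mean():.4f}")
```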

Keywords: bit error ratio (BER), OFDM, peak-to-average power ratio (PAPR), sub-carriers

Procedia PDF Downloads 524
9511 The Effects of Billboard Content and Visible Distance on Driver Behavior

Authors: Arsalan Hassan Pour, Mansoureh Jeihani, Samira Ahangari

Abstract:

Distracted driving has been one of the most integral concerns surrounding our daily use of vehicles since the invention of the automobile. While much attention has recently been given to cell phone related distraction, commercial billboards along roads are also candidates for drivers’ visual and cognitive distraction, as they may take drivers’ eyes from the road and their minds off the driving task to see, perceive, and think about the billboard’s content. Using a driving simulator and a head-mounted eye-tracking system, this study collected data on speed change, acceleration, deceleration, throttle response, collisions, lane changing, and offset from the center of the lane, along with gaze fixation duration and frequency. Some 92 participants from a fairly diverse sociodemographic background drove on a simulated freeway in the Baltimore, Maryland area and were exposed to three different billboards to investigate the effects of billboards on driver behavior. Participants glanced at the billboards several times with varying frequency; glances were most frequent at the billboard with the highest cognitive load. About 74% of the participants did not look at billboards for more than two seconds per glance, except at the billboard with a short visible area. Analysis of variance (ANOVA) was performed to find variations in driving behavior across the areas where the billboard is not yet visible, where it is readable, and after it is passed. The results show slight differences in speed, throttle, brake, steering velocity, and lane changing among these areas. Brake force and deviation from the center of the lane increased in the readable area compared with the visible area, and speed increased right after each billboard. The results indicate that billboards have a significant effect on driving performance and visual attention, depending on their content and visibility status. Generalized linear model (GLM) analysis showed no connection between participants’ age or driving experience and gaze duration. However, the visible distance of the billboard, gender, and billboard content had significant effects on gaze duration.
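
As an illustration of the statistical workflow, the sketch below runs a one-way ANOVA and a Gaussian GLM with statsmodels on simulated stand-in data; the variable names and effect sizes are hypothetical and do not reproduce the study’s dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical simulator records: 92 participants x 3 billboards
rng = np.random.default_rng(0)
n = 276
df = pd.DataFrame({
    "area": rng.choice(["invisible", "readable", "post"], size=n),
    "gender": rng.choice(["F", "M"], size=n),
    "visible_distance": rng.uniform(50, 300, size=n),
})
# Inject small assumed effects so the tests have something to detect
df["speed"] = 60 + (df["area"] == "post") * 2 + rng.normal(0, 3, n)
df["gaze_duration"] = np.abs(0.5 + 0.002 * df["visible_distance"]
                             + rng.normal(0, 0.3, n))

# One-way ANOVA: does mean speed differ across billboard areas?
anova = sm.stats.anova_lm(smf.ols("speed ~ C(area)", data=df).fit())
print(anova)

# GLM: gaze duration as a function of visible distance and gender
glm = smf.glm("gaze_duration ~ visible_distance + C(gender)",
              data=df, family=sm.families.Gaussian()).fit()
print(glm.summary())
```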

Keywords: ANOVA, billboards, distracted driving, drivers' behavior, driving simulator, eye-tracking system, GLM

Procedia PDF Downloads 117
9510 The Development of a Digitally Connected Factory Architecture to Enable Product Lifecycle Management for the Assembly of Aerostructures

Authors: Nicky Wilson, Graeme Ralph

Abstract:

Legacy aerostructure assembly is defined by large components, low build rates, and manual assembly methods. With increasing demand for commercial aircraft and emerging markets such as eVTOL (electric vertical take-off and landing), current manufacturing methods cannot efficiently meet these higher-rate demands. This project looks at how legacy manufacturing processes can be rate-enabled by taking a holistic view of data usage, focusing on how data can be collected to enable fully integrated digital factories and supply chains. The study focuses on how data flows both up and down the supply chain to create a digital thread specific to each part and assembly, while enabling machine learning through real-time, closed-loop feedback systems. The study also develops a bespoke architecture to enable connectivity both within the factory and with the wider PLM (product lifecycle management) system, moving away from the traditional point-to-point systems used to connect IO devices and towards a hub-and-spoke architecture that exploits report-by-exception principles. This paper outlines the key issues facing legacy aircraft manufacturers, focusing on what future manufacturing will look like when Industry 4.0 principles are adopted. The research also defines the data architecture of a PLM system to enable the transfer and control of a digital thread within the supply chain, and proposes a standardised communications protocol as a scalable solution for connecting IO devices within a production environment. This research comes at a critical time for aerospace manufacturers, who are seeing a shift towards the integration of digital technologies within legacy production environments while build rates continue to grow. It is vital that manufacturing processes become more efficient in order to meet these demands while also securing future work for many manufacturers.
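
To illustrate the report-by-exception principle named above, here is a small Python sketch: a sensor wrapper publishes a reading to the hub only when it moves beyond a dead-band, rather than streaming every sample point-to-point. The topic name, dead-band, and publish callback are placeholders for a real message-broker client.

```python
from typing import Callable

class ReportByException:
    """Publish a device reading only when it changes beyond a dead-band,
    instead of streaming every sample over a point-to-point link."""
    def __init__(self, topic: str, publish: Callable[[str, float], None],
                 deadband: float = 0.5):
        self.topic, self.publish = topic, publish
        self.deadband = deadband
        self.last = None
    def sample(self, value: float):
        # Forward to the hub only on first reading or significant change
        if self.last is None or abs(value - self.last) >= self.deadband:
            self.publish(self.topic, value)
            self.last = value

# Stub publish callback; a real factory would hand in an MQTT/AMQP client
log = []
sensor = ReportByException("cell3/torque", lambda t, v: log.append((t, v)))
for v in [10.0, 10.1, 10.2, 11.0, 11.1, 12.5]:
    sensor.sample(v)
print(log)  # only 10.0, 11.0, and 12.5 reach the hub
```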

Keywords: Industry 4.0, digital transformation, IoT, PLM, automated assembly, connected factories

Procedia PDF Downloads 63
9509 Rendering Cognition Based Learning in Coherence with Development within the Context of PostgreSQL

Authors: Manuela Nayantara Jeyaraj, Senuri Sucharitharathna, Chathurika Senarath, Yasanthy Kanagaraj, Indraka Udayakumara

Abstract:

PostgreSQL is an Object Relational Database Management System (ORDBMS) that has been in existence for a while. Despite the superior features it offers for managing databases and data, the database community has not fully realized the importance and advantages of PostgreSQL. Hence, this research focuses on providing a better development environment for PostgreSQL in order to increase its utilization and elucidate its importance. PostgreSQL is also known as the world’s most advanced SQL-compliant open source ORDBMS. Yet users have not turned to PostgreSQL, partly because its capabilities remain hidden under layers of complexity and because its persistently textual environment is difficult for an introductory user. Simply stated, there is a dire need for an easy way for users to comprehend the procedures and standards by which databases are created, tables and the relationships among them are defined, and queries are built and their flow controlled by conditions in PostgreSQL, to help the community adopt PostgreSQL at a faster rate. This research therefore first identifies the dominant features PostgreSQL provides over its competitors. Following the identified merits, an analysis of why the database community is hesitant to migrate to PostgreSQL’s environment is carried out. These findings are then modulated and tailored based on the scope and the constraints discovered. The research proposes a system that serves both as a design platform and as a learning tool, providing an interactive method of learning via a visual editor mode and incorporating a textual editor for well-versed users. The study is based on devising viable solutions that analyze a user’s cognitive perception in comprehending human-computer interfaces and the behavioural processing of design elements. By providing a visually draggable and manipulable environment for working with PostgreSQL databases and table queries, the system is expected to highlight the elementary features of PostgreSQL over existing systems, in order to convey the importance and simplicity it offers to a hesitant user.
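
To illustrate one way such a visual editor could work, the sketch below holds a draggable table model as plain data and generates the corresponding PostgreSQL DDL, so a beginner never has to start from the textual environment. The tables, columns, and function names are hypothetical and are not the proposed system’s actual design.

```python
# A visual editor's canvas state, held as plain data: each table the user
# has dragged out, with its columns and relationships (hypothetical model).
model = {
    "students": {"id": "serial PRIMARY KEY", "name": "text NOT NULL"},
    "courses": {"id": "serial PRIMARY KEY", "title": "text NOT NULL",
                "student_id": "integer REFERENCES students(id)"},
}

def to_ddl(model: dict) -> str:
    """Translate the visual model into PostgreSQL CREATE TABLE statements."""
    stmts = []
    for table, cols in model.items():
        body = ",\n  ".join(f"{col} {typ}" for col, typ in cols.items())
        stmts.append(f"CREATE TABLE {table} (\n  {body}\n);")
    return "\n\n".join(stmts)

print(to_ddl(model))  # paste into psql, or execute via a driver such as psycopg2
```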

Keywords: cognition, database, PostgreSQL, text-editor, visual-editor

Procedia PDF Downloads 262