Search results for: parking monitoring system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19189

9379 Study of the Influence of Refractory Nitride Additives on Hydrogen Storage Properties of Ti6Al4V-Based Materials Produced by Spark Plasma Sintering

Authors: John Olorunfemi Abe, Olawale Muhammed Popoola, Abimbola Patricia Idowu Popoola

Abstract:

Hydrogen is an appealing alternative to fossil fuels because of its abundance, low weight, high energy density, and relative lack of contaminants. However, its low density presents a number of storage challenges. Therefore, this work studies the influence of refractory nitride additives consisting of 5 wt. % each of hexagonal boron nitride (h-BN), titanium nitride (TiN), and aluminum nitride (AlN) on the hydrogen storage and electrochemical characteristics of Ti6Al4V-based materials produced by spark plasma sintering. The microstructure and phase constituents of the sintered materials were characterized using scanning electron microscopy (in conjunction with energy-dispersive spectroscopy) and X-ray diffraction, respectively. Pressure-composition-temperature (PCT) measurements were used to assess the hydrogen absorption/desorption behavior, kinetics, and storage capacities of the sintered materials. The pure Ti6Al4V alloy displayed a two-phase (α+β) microstructure, while the modified composites exhibited apparent microstructural modifications with the appearance of nitride-rich secondary phases. The kinetics of hydrogen absorption were found to be diffusion-controlled; consequently, absorption proceeded faster at elevated temperatures. The additives acted as catalysts, lowered the activation energy and accelerated the rate of hydrogen sorption in the composites relative to the monolithic alloy. Ti6Al4V-5 wt. % h-BN appears to be the most promising candidate for hydrogen storage (2.28 wt. %), followed by Ti6Al4V-5 wt. % TiN (2.09 wt. %), whereas Ti6Al4V-5 wt. % AlN shows the least hydrogen storage performance (1.35 wt. %). Accordingly, the developed hydride system (Ti6Al4V-5h-BN) may be competitive for use in applications involving short-range continuous vehicles (~50-100 km) as well as stationary applications such as electrochemical devices, large-scale storage cylinders in hydrogen production locations, and hydrogen filling stations.
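
The link drawn above between catalytic additives, a lower activation energy, and faster sorption can be read through a standard Arrhenius-type rate law; the sketch below is a generic textbook relation and is not taken from the paper.

```latex
% Generic Arrhenius-type relation for a thermally activated, diffusion-controlled
% sorption rate constant k(T): A is the pre-exponential factor, E_a the apparent
% activation energy, R the gas constant and T the absolute temperature.
\[
  k(T) = A \exp\!\left(-\frac{E_a}{RT}\right)
\]
% Lowering E_a (catalytic nitride additives) or raising T both increase k(T),
% consistent with the faster absorption kinetics reported above.
```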

Keywords: hydrogen storage, Ti6Al4V hydride system, pressure-composition-temperature measurements, refractory nitride additives, spark plasma sintering, Ti6Al4V-based materials

Procedia PDF Downloads 43
9378 Modified Fractional Curl Operator

Authors: Rawhy Ismail

Abstract:

Applying fractional calculus in the field of electromagnetics has yielded significant results. The fractionalization of the conventional curl operator yields additional solutions to an electromagnetic problem. This work restudies the concept of the fractional curl operator by considering fractional time derivatives in Maxwell’s curl equations. In this framework, a general scheme for the wave loss term is introduced, and the degrees of freedom of the system are modified by imposing the new fractional parameters. The conventional case is recovered by setting all fractional derivatives to unity.
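
As a hedged illustration of what fractional time derivatives in Maxwell's curl equations can look like, one common formulation replaces the first-order time derivatives with fractional derivatives of orders α and β; the notation below is illustrative and not reproduced from the paper.

```latex
% Illustrative fractional-time-derivative form of Maxwell's curl equations;
% alpha and beta are the fractional orders, and alpha = beta = 1 recovers the
% conventional equations, as stated in the abstract.
\begin{aligned}
  \nabla \times \mathbf{E} &= -\,\frac{\partial^{\alpha} \mathbf{B}}{\partial t^{\alpha}}, \\
  \nabla \times \mathbf{H} &= \mathbf{J} + \frac{\partial^{\beta} \mathbf{D}}{\partial t^{\beta}}.
\end{aligned}
```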

Keywords: curl operator, fractional calculus, fractional curl operators, Maxwell equations

Procedia PDF Downloads 461
9377 Improving the Gain of a Multiband Antenna by Adding an Artificial Magnetic Conductor Metasurface

Authors: Amira Bousselmi

Abstract:

This article presents a PIFA antenna designed for geolocation (GNSS) applications operating at 1.278 GHz, 2.8 GHz, 5.7 GHz and 10 GHz. To improve the performance of the antenna, an artificial magnetic conductor (AMC) structure was used. Combining the antenna with the AMC resulted in a measured gain of 4.78 dBi. The results of simulations and measurements are presented. CST Microwave Studio is used to design the antenna and compare its performance. The antenna design methodology and the design and characterization of the AMC surface are described, and the simulated and measured performances of the AMC-backed antenna are then discussed.

Keywords: multiband antenna, global navigation system, AMC, Galileo

Procedia PDF Downloads 57
9376 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is a suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to deal with the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) have been adopted, these currently being among the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour after hour. The evaluation of the learning process for the investigated models was mostly based upon the mean square error criterion; however, during model validation, a number of other methods of quantitative evaluation were taken into account. The presented pollution prediction model has been verified by way of real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10 concentrations, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 hours, served as input data. Due to the specificity of CNN-type networks, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for each hour of the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. The several models giving the best results were selected, and a comparison was then made with other models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real ‘big’ data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. Moreover, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hours of prediction, respectively.
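
As a minimal sketch of the architecture described above (stacked convolutional and pooling layers feeding a 24-element output vector, trained with a mean-square-error criterion), the snippet below uses Keras; the input window, feature count and layer sizes are illustrative assumptions, not values taken from the paper.

```python
# Minimal CNN regressor for 24-hour-ahead PM10 forecasting; shapes and layer
# sizes are placeholders chosen only to illustrate how the data flow as tensors.
import numpy as np
import tensorflow as tf

PAST_HOURS, N_FEATURES = 48, 6   # e.g. PM2.5, PM10, temperature, wind, forecasts

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(PAST_HOURS, N_FEATURES)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),   # convolution
    tf.keras.layers.MaxPooling1D(pool_size=2),                      # pooling
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(24),              # one PM10 value per upcoming hour
])
model.compile(optimizer="adam", loss="mse")  # mean square error criterion

# toy arrays only to show the expected tensor shapes
X = np.random.rand(128, PAST_HOURS, N_FEATURES).astype("float32")
y = np.random.rand(128, 24).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```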

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 126
9375 Multi-Agent System Based Solution for Operating Agile and Customizable Micro Manufacturing Systems

Authors: Dylan Santos De Pinho, Arnaud Gay De Combes, Matthieu Steuhlet, Claude Jeannerat, Nabil Ouerhani

Abstract:

The Industry 4.0 initiative has been launched to address huge challenges related to ever-smaller batch sizes. The end-user need for highly customized products requires highly adaptive production systems in order to maintain shop-floor efficiency. Most classical software solutions that operate manufacturing processes on a shop floor are based on rigid Manufacturing Execution Systems (MES), which are not capable of adapting the production order on the fly in response to changing demands and/or conditions. In this paper, we present a highly modular and flexible solution to orchestrate a set of production systems composed of a micro-milling machine-tool, a polishing station, a cleaning station, a part inspection station, and a rough material store. The different stations are installed according to a novel matrix configuration of a 3x3 vertical shelf. The different cells of the shelf are connected through horizontal and vertical rails on which a set of shuttles circulate to transport the machined parts from one station to another. Our software solution for orchestrating the tasks of each station is based on a Multi-Agent System. Each station and each shuttle is operated by an autonomous agent. All agents communicate with a central agent that holds all the information about the manufacturing order. The core innovation of this paper lies in the path planning of the different shuttles, with two major objectives: 1) reduce the waiting time of stations and thus reduce the cycle time of the entire part, and 2) reduce disturbances such as vibration generated by the shuttles, which strongly impact the manufacturing process and thus the quality of the final part. Simulation results show that the cycle time of the parts is reduced by up to 50% compared with MES-operated linear production lines, while disturbance is systematically avoided for critical stations such as the milling machine-tool.
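
The central-agent/station-agent/shuttle-agent split and the two path-planning objectives described above can be sketched in a few lines of Python; the cost model, dispatch rule, and any station names beyond those in the abstract are illustrative assumptions, not the authors' implementation.

```python
# Toy dispatcher illustrating the multi-agent split described above: shuttles
# are autonomous agents, a central agent holds the manufacturing order and
# assigns transport tasks so that (1) station waiting time is minimised and
# (2) shuttles avoid the vibration-critical milling cell while it is cutting.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TransportTask:
    part_id: int
    src: str
    dst: str


@dataclass
class ShuttleAgent:
    name: str
    position: str = "store"
    busy: bool = False

    def cost(self, task: TransportTask) -> int:
        # placeholder travel-cost model: 0 if already at the pickup cell, else 1
        return 0 if self.position == task.src else 1


class CentralAgent:
    """Holds the manufacturing order and dispatches tasks to idle shuttles."""

    def __init__(self, shuttles: List[ShuttleAgent]) -> None:
        self.shuttles = shuttles
        self.queue: List[TransportTask] = []

    def submit(self, task: TransportTask) -> None:
        self.queue.append(task)

    def dispatch(self, milling_active: bool) -> None:
        for task in list(self.queue):
            # objective 2: defer moves involving the milling cell while it cuts
            if milling_active and "milling" in (task.src, task.dst):
                continue
            shuttle = self._cheapest_idle_shuttle(task)
            if shuttle is None:
                continue  # objective 1: keep the task queued for the next idle shuttle
            shuttle.busy, shuttle.position = True, task.dst
            self.queue.remove(task)
            print(f"{shuttle.name}: part {task.part_id} {task.src} -> {task.dst}")

    def _cheapest_idle_shuttle(self, task: TransportTask) -> Optional[ShuttleAgent]:
        idle = [s for s in self.shuttles if not s.busy]
        return min(idle, key=lambda s: s.cost(task)) if idle else None


central = CentralAgent([ShuttleAgent("shuttle-1"), ShuttleAgent("shuttle-2", "milling")])
central.submit(TransportTask(1, "store", "polishing"))
central.submit(TransportTask(2, "milling", "inspection"))
central.dispatch(milling_active=True)   # task 2 is deferred, task 1 is assigned
```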

Keywords: multi-agent systems, micro-manufacturing, flexible manufacturing, transfer systems

Procedia PDF Downloads 117
9374 Innovating Electronics Engineering for Smart Materials Marketing

Authors: Muhammad Awais Kiani

Abstract:

The field of electronics engineering plays a vital role in the marketing of smart materials. Smart materials are innovative, adaptive materials that can respond to external stimuli, such as temperature, light, or pressure, in order to enhance performance or functionality. As the demand for smart materials continues to grow, it is crucial to understand how electronics engineering can contribute to their marketing strategies. This abstract presents an overview of the role of electronics engineering in the marketing of smart materials. It explores the various ways in which electronics engineering enables the development and integration of smart features within materials, enhancing their marketability. Firstly, electronics engineering facilitates the design and development of sensing and actuating systems for smart materials. These systems enable the detection and response to external stimuli, providing valuable data and feedback to users. By integrating sensors and actuators into materials, their functionality and performance can be significantly enhanced, making them more appealing to potential customers. Secondly, electronics engineering enables the creation of smart materials with wireless communication capabilities. By incorporating wireless technologies such as Bluetooth or Wi-Fi, smart materials can seamlessly interact with other devices, providing real-time data and enabling remote control and monitoring. This connectivity enhances the marketability of smart materials by offering convenience, efficiency, and improved user experience. Furthermore, electronics engineering plays a crucial role in power management for smart materials. Implementing energy-efficient systems and power harvesting techniques ensures that smart materials can operate autonomously for extended periods. This aspect not only increases their market appeal but also reduces the need for constant maintenance or battery replacements, thus enhancing customer satisfaction. Lastly, electronics engineering contributes to the marketing of smart materials through innovative user interfaces and intuitive control mechanisms. By designing user-friendly interfaces and integrating advanced control systems, smart materials become more accessible to a broader range of users. Clear and intuitive controls enhance the user experience and encourage wider adoption of smart materials in various industries. In conclusion, electronics engineering significantly influences the marketing of smart materials by enabling the design of sensing and actuating systems, wireless connectivity, efficient power management, and user-friendly interfaces. The integration of electronics engineering principles enhances the functionality, performance, and marketability of smart materials, making them more adaptable to the growing demand for innovative and connected materials in diverse industries.

Keywords: electronics engineering, smart materials, marketing, power management

Procedia PDF Downloads 47
9373 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System

Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar

Abstract:

Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing from the leading edge and suction from the trailing edge. A jet at the leading edge uses the Coanda effect to energise the boundary layer in order to delay flow separation and create high lift with low drag. The SWAP concept originated with the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out at the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of this system. The tests aim to understand the airflow characteristics around the aerofoil and investigate the approximate lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibit clear trends of increasing lift as injection momentum increases, with critical flow attachment points being identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°. The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows and over-predicted surface pressures and lift coefficients for fully attached flow cases. Work must continue on finding an all-encompassing modelling approach which predicts surface pressures well for all combinations of jet injection momentum and AOA.
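
For readers unfamiliar with the jet momentum coefficient Cµ referred to above, a commonly used definition from circulation-control studies is reproduced below; the symbols are standard and are not taken from the paper.

```latex
% Commonly used definition of the jet momentum coefficient for circulation
% control: jet mass flow rate times jet velocity, normalised by the freestream
% dynamic pressure and a reference area.
\[
  C_{\mu} = \frac{\dot{m}_{\mathrm{jet}}\, V_{\mathrm{jet}}}{\tfrac{1}{2}\,\rho_{\infty}\, U_{\infty}^{2}\, S_{\mathrm{ref}}}
\]
```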

Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel

Procedia PDF Downloads 122
9372 Influence Zone of Strip Footing on Untreated and Cement Treated Sand Mat Underlain by Soft Clay

Authors: Sharifullah Ahmed

Abstract:

Shallow foundations on soft soils without ground improvement can experience a high level of settlement. In such a case, an alternative to pile foundations may be shallow strip footings placed on a soil system in which the upper layer is untreated or cement-treated compacted sand that limits the settlement to a permissible level. This research work deals with a rigid plane-strain strip footing of 2.5 m width placed on a soil system consisting of an untreated or cement-treated sand layer underlain by homogeneous soft clay. Upper layers both thin and thick compared with the footing width were considered. The soft inorganic cohesive NC clay layer is considered undrained for plastic loading stages and drained in consolidation stages, and the sand layer is drained in all loading stages. FEM analysis was done using PLAXIS 2D Version 8.0 with a model consisting of a clay deposit of 15 m thickness and 18 m width. The soft clay layer was modeled using the Hardening Soil Model, Soft Soil Model and Soft Soil Creep Model, and the upper improvement layer was modeled using only the Hardening Soil Model. The system is considered fully saturated. A natural void ratio of 1.2 is used. Total displacement fields of the strip footing and subsoil layers are presented for both the untreated and cement-treated sand upper layers. For Hi/B = 0.6 or above with the untreated upper layer, the major deformation and the influence zone of the footing are confined within the upper layer, which indicates that the upper layer is fully effective in bearing the foundation. For Hi/B = 0.3 or above with the cement-treated upper layer, the major deformation and the influence zone of the footing are likewise confined within the upper layer, indicating that the cement-treated upper layer is fully effective. The brittle behavior of cemented sand and the formation of fractures or cracks are not considered in this analysis.

Keywords: displacement, ground improvement, influence depth, PLAXIS 2D, primary and secondary settlement, sand mat, soft clay

Procedia PDF Downloads 76
9371 Challenges to Quality Primary Health Care in Saudi Arabia and Potential Improvements Implemented by Other Systems

Authors: Hilal Al Shamsi, Abdullah Almutairi

Abstract:

Introduction: As primary healthcare centres play an important role in implementing Saudi Arabia’s health strategy, this paper offers a review of publications on the quality of the country’s primary health care. With the aim of deciding on solutions for improvement, it provides an overview of healthcare quality in this context and indicates barriers to quality. Method: Using two databases, ProQuest and Scopus, data extracted from published articles were systematically analysed for determining the care quality in Saudi primary health centres and obstacles to achieving higher quality. Results: Twenty-six articles met the criteria for inclusion in this review. The components of healthcare quality were examined in terms of the access to and effectiveness of interpersonal and clinical care. Good access and effective care were identified in such areas as maternal health care and the control of epidemic diseases, whereas poor access and effectiveness of care were shown for chronic disease management programmes, referral patterns (in terms of referral letters and feedback reports), health education and interpersonal care (in terms of language barriers). Several factors were identified as barriers to high-quality care. These included problems with evidence-based practice implementation, professional development, the use of referrals to secondary care and organisational culture. Successful improvements have been implemented by other systems, such as mobile medical units, electronic referrals, online translation tools and mobile devices and their applications; these can be implemented in Saudi Arabia for improving the quality of the primary healthcare system in this country. Conclusion: The quality of primary health care in Saudi Arabia varies among the different services. To improve quality, management programmes and organisational culture must be promoted in primary health care. Professional development strategies are also needed for improving the skills and knowledge of healthcare professionals. Potential improvements can be implemented to improve the quality of the primary health system.

Keywords: quality, primary health care, Saudi Arabia, health centres, general medical

Procedia PDF Downloads 174
9370 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or Intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Database Process Model, which starts with the selection of the datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory and was then pre-processed. The major pre-processing activities carried out for this study include filling in missing values, removing outliers, resolving inconsistencies, integrating labelled and unlabelled datasets, dimensionality reduction, size reduction, and data transformation such as discretization. A total of 21,533 intrusion records are used for training the models. For validating the performance of the selected model, a separate set of 3,397 records is used for testing. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created with the J48 decision tree algorithm using 10-fold cross-validation and default parameter values showed the best classification accuracy. The model has a prediction accuracy of 96.11% on the training datasets and 93.2% on the test dataset in classifying new instances into the normal, DOS, U2R, R2L and probe classes. The findings of this study show that the data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are put forward towards developing an applicable system in the area of study.
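
As a rough, hedged sketch of the modelling step described above, the snippet below uses scikit-learn's DecisionTreeClassifier as a stand-in for Weka's J48 (C4.5) alongside Gaussian Naïve Bayes, each evaluated with 10-fold cross-validation; the placeholder data, feature count and class labels are assumptions for illustration, not the study's dataset.

```python
# Compare a decision tree (J48 analogue) and Naive Bayes with 10-fold CV on
# placeholder intrusion data labelled as normal, DOS, U2R, R2L or probe.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((2000, 41))              # placeholder connection features
y = rng.integers(0, 5, size=2000)       # 0..4 = normal, DOS, U2R, R2L, probe

for name, clf in [("decision tree (J48 analogue)", DecisionTreeClassifier()),
                  ("naive Bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```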

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 278
9369 Quantum Teleportation Using W-BELL and Bell-GHZ Channels

Authors: Abhinav Pandey

Abstract:

Teleportation is the transfer of quantum information between two particles that are not in physical contact with each other. It is a well-established concept in quantum computation and has been used in theoretical physics. Using a maximally entangled pair, teleportation can be achieved with 100% success over the possible measurement outcomes. We introduce a 5-qubit general entanglement system using W-Bell and Bell-GHZ channel pairs and show its usefulness in teleportation. In this paper, we use these channels to achieve probabilistic teleportation through channels conventionally regarded as non-teleporting, which has not been achieved before. We also compare the W-Bell and Bell-GHZ channels and determine which is better in terms of the probabilistic results of single-qubit teleportation.
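
For reference, the standard forms of the Bell, GHZ and W states that underlie the channel constructions mentioned above are given below; the paper's specific 5-qubit W-Bell and Bell-GHZ composite channels are not reproduced here.

```latex
% Standard maximally entangled Bell state, 3-qubit GHZ state, and 3-qubit W state.
\begin{aligned}
  |\Phi^{+}\rangle &= \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right), \\
  |\mathrm{GHZ}\rangle &= \tfrac{1}{\sqrt{2}}\left(|000\rangle + |111\rangle\right), \\
  |W\rangle &= \tfrac{1}{\sqrt{3}}\left(|001\rangle + |010\rangle + |100\rangle\right).
\end{aligned}
```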

Keywords: entanglement, teleportation, no cloning theorem, quantum mechanics, probability

Procedia PDF Downloads 25
9368 A Paradigm Shift into the Primary Teacher Education Program in Bangladesh

Authors: Happy Kumar Das, Md. Shahriar Shafiq

Abstract:

This paper portrays an intended change in the primary teacher education program in Bangladesh. An initiative has been taken with a vision to ensure an integrated approach to developing trainee teachers’ knowledge and understanding about learning at a deeper level, and with that aim, the Diploma in Primary Education (DPEd) program replaces the Certificate-in-Education (C-in-Ed) program for primary teachers in the Bangladeshi context. The stated professional values of the existing program, such as a ‘learner-centered’, ‘reflective’ approach to pedagogy, tend to contradict the practice exemplified through the delivery mechanism. To address these challenges, the new program covers, through its two main components, (i) Training Institute-based learning and (ii) school-based learning, the knowledge and values that underpin the actual practice of teaching. These two components are given approximately equal weighting within the program in terms of time, content and assessment, as the integration seeks to combine theoretical knowledge with practical knowledge and vice versa. The curriculum emphasizes a balance between the taught modules and the components of the practicum. For example, the theories of formative and summative assessment techniques are elaborated through focused reflection on case studies as well as observation and teaching practice in the classroom. The key ideology reflected through this newly developed program is the teachers’ belief in ‘holistic education’, which can create opportunities for skills development in all three domains (cognitive, social and affective) simultaneously. The proposed teacher education program aims to address these areas of generic skill development alongside subject-specific learning outcomes. An exploratory study was designed in which 7 Primary Teachers’ Training Institutes (PTIs) in 7 divisions of Bangladesh were used to pilot the DPEd program. The analysis was based on document analysis, periodic monitoring reports and empirical data gathered from the experimental PTIs. The findings of the study revealed that the intervention brought positive change in teachers’ professional beliefs, attitudes and skills along with improvement of the school environment. Teachers in training schools work together for collective professional development, supporting each other through lesson study, action research, reflective journals, group sharing and so on. Although the DPEd program addresses the above-mentioned factors, one of its challenges is the existing capacity and capabilities of the PTIs for its effective implementation.

Keywords: Bangladesh, effective implementation, primary teacher education, reflective approach

Procedia PDF Downloads 201
9367 Fire Resilient Cities: The Impact of Fire Regulations, Technological and Community Resilience

Authors: Fanny Guay

Abstract:

Building resilience, sustainable buildings, urbanization, climate change and resilient cities are just a few examples of where research has focused in the last few years. It is obvious that there is a need to rethink how we are building our cities and how we are renovating our existing buildings. However, the remaining question is: how can we ensure that we are building sustainable yet resilient cities? There are many aspects one can touch upon when discussing resilience in cities, but after the Grenfell fire in June 2017, it has become clear that fire resilience must be a priority. We define resilience as a holistic approach including communities, society and systems, focusing not only on resisting the effects of a disaster, but also on how a system copes with and recovers from it. Cities are an example of such a system, where components such as buildings have an important role to play. A building on fire will have an impact on the community, the economy, the environment, and thus the entire system. This article discusses the current state of the concept of fire resilience and suggests actions to support the construction of more fire-resilient buildings. Using the case of Grenfell and the fire safety regulations in the UK, we briefly compare the fire regulations in other European countries, more precisely France, Germany and Denmark, to underline the differences and make some suggestions for increasing fire resilience via regulation. For this research, we also include other types of resilience, such as technological resilience, discussing the structure of buildings themselves, as well as community resilience, considering the role of communities in building resilience. Our findings demonstrate that to increase fire resilience, amending existing regulations might be necessary, for example, in how reaction-to-fire tests are performed and how building products are classified. However, as we are looking at national regulations, we are only able to make general suggestions for improvement. Another finding of this research is that the capacity of the community to recover and adapt after a fire is also an essential factor. Fundamentally, fire resilience, technological resilience and community resilience are closely connected. Building resilient cities is not only about sustainable buildings or energy efficiency; it is about assuring that all the aspects of resilience are included when building or renovating buildings. We must ask ourselves questions such as: Who are the users of this building? Where is the building located? What are the components of the building, how was it designed and which construction products have been used? If we want to have resilient cities, we must answer these basic questions and assure that basic factors such as fire resilience are included in our assessment.

Keywords: buildings, cities, fire, resilience

Procedia PDF Downloads 146
9366 Identifying, Reporting and Preventing Medical Errors Among Nurses Working in Critical Care Units At Kenyatta National Hospital, Kenya: Closing the Gap Between Attitude and Practice

Authors: Jared Abuga, Wesley Too

Abstract:

Medical error is the third leading cause of death in the US, with approximately 98,000 deaths occurring every year as a result of medical errors. The global financial burden of medication errors is roughly USD 42 billion. Medication errors may lead to at least one death daily and injure roughly 1.3 million people every year. Medical error reporting is essential in creating a culture of accountability in our healthcare system. Studies of healthcare workers’ attitudes and practices in reporting medical errors have shown that the major factors in under-reporting include work stress and fear of medico-legal consequences due to the disclosure of errors. Further, the majority believed that increased reporting of medical errors would contribute to a better system. Most hospitals depend on nurses to discover medication errors because they are considered to be the sources of these errors, whether as contributors or mere observers; consequently, nurses’ perceptions of medication errors and of what needs to be done are vital to reducing the incidence of medication errors. We sought to explore knowledge among nurses on medical errors and the factors affecting or hindering the reporting of medical errors among nurses working at the emergency unit, KNH. Critical care nurses are faced with many barriers to completing incident reports on medication errors. One of these barriers, which contributes to underreporting, is a lack of education and/or knowledge regarding medication errors and the reporting process. This study, therefore, sought to determine the availability and the use of reporting systems for medical errors in critical care units. It also sought to establish nurses’ perceptions regarding medical errors and reporting, and to document factors facilitating timely identification and reporting of medical errors in critical care settings. Methods: The study used a cross-sectional design to collect data from 76 critical care nurses at Kenyatta National Hospital, a national teaching and research referral hospital in Kenya. Data analysis is ongoing. By October 2022, we will have the analysis, results, discussion, and recommendations of the study ready for the conference in 2023.

Keywords: errors, medical, Kenya, nurses, safety

Procedia PDF Downloads 221
9365 The Cost-Effectiveness of Pancreatic Surgical Cancer Care in the US vs. the European Union: Results of a Review of the Peer-Reviewed Scientific Literature

Authors: Shannon Hearney, Jeffrey Hoch

Abstract:

While all cancers are costly to treat, pancreatic cancer is a notoriously costly and deadly form of cancer. Across the world, there are a variety of treatment centers ranging from small clinics to large, high-volume hospitals, as well as differing structures of payment and access. While it has been noted that centers that treat a high volume of pancreatic cancer patients provide higher-quality care, it is unclear whether that care is cost-effective. In the US, there is no clear consensus on the cost-effectiveness of high-volume centers for the surgical care of pancreatic cancer. European countries such as Finland and Italy have shown that high-volume centers have lower mortality rates and can have lower costs; however, there is still a gap in knowledge about these centers’ cost-effectiveness globally. This paper seeks to review the current literature in Europe and the US to gain a better understanding of the cost-effectiveness of high-volume pancreatic surgical centers while considering the contextual differences in health system structure. A review of major reference databases such as Medline, Embase and PubMed will be conducted for cost-effectiveness studies on the surgical treatment of pancreatic cancer at high-volume centers. Possible MeSH terms include, but are not limited to: “pancreatic cancer”, “cost analysis”, “cost-effectiveness”, “economic evaluation”, “pancreatic neoplasms”, “surgical”, “Europe”, “socialized medicine”, “privatized medicine”, “for-profit”, and “high-volume”. Studies must also have been available in the English language. This review will encompass European scientific literature as well as that of the US. Based on our preliminary findings, we anticipate that high-volume hospitals provide better care at greater costs. We anticipate that high-volume hospitals may be cost-effective in different contexts depending on the national structure of a healthcare system. Countries with more centralized and socialized healthcare may yield results that are more cost-effective. The cost-effectiveness of the surgical care of pancreatic cancer at high-volume centers may differ internationally, especially when comparing centers in the United States to others throughout Europe.

Keywords: cost-effectiveness analysis, economic evaluation, pancreatic cancer, scientific literature review

Procedia PDF Downloads 77
9364 The Relationship between Creative Imagination and Curriculum

Authors: Faride Hashemiannejad, Shima Oloomi

Abstract:

Imagination is one of the important elements of creative thinking, a skill that needs attention from the educational system. Although most students learn reading, writing, and arithmetic skills well, they lack higher-level thinking skills such as creative thinking. Therefore, in the information age and at the threshold of the knowledge-based society, the educational system needs to rethink its goals and mission and concentrate on a creativity-based curriculum. Among the curriculum elements (goals, content, method and evaluation), “method” is a major domain whose reform can pave the way for fostering imagination and creativity. The purpose of this study was to examine the relationship between creativity development and curriculum. The research questions were: (1) Is there a relationship between the cognitive-emotional structure of the classroom and creativity development? (2) Is there a relationship between the environmental-social structure of the classroom and creativity development? (3) Is there a relationship between the thinking structure of the classroom and creativity development? (4) Is there a relationship between the physical structure of the classroom and creativity development? (5) Is there a relationship between the instructional structure of the classroom and creativity development? Method: This is an applied study using a correlational research method. Participants: The study included 894 high school students, through 11th grade, from seven schools in seven zones of Mashad city. Sampling Plan: Sampling was based on multi-stage random sampling. Measurement: The measures in this study were (a) the Test of Creative Thinking and (b) a researcher-made questionnaire comprising five sections: cognitive-emotional structure, environmental-social structure, thinking structure, physical structure, and instructional structure. Results: There was a significant relationship between the cognitive-emotional structure of the classroom and students’ creativity development (sig=0.139). There was a significant relationship between the environmental-social structure of the classroom and students’ creativity development (sig=0.006). There was a significant relationship between the thinking structure of the classroom and students’ creativity development (sig=0.004). There was no significant relationship between the physical structure of the classroom and students’ creativity development (sig=0.215). There was a significant relationship between the instructional structure of the classroom and students’ creativity development (sig=0.003). These findings indicate that if students feel secure, calm and confident, they can experience creative learning. Also, the quality of responses to students’ questions, imaginings and risk-taking can influence their creativity development.

Keywords: imagination, creativity, curriculum, bioinformatics, biomedicine

Procedia PDF Downloads 463
9363 Pursuing Knowledge Society Excellence: Knowledge Management and Open Innovation Platforms for Research, Industry and Business Collaboration in Singapore

Authors: Irina-Emily Hansen, Ola Jon Mork

Abstract:

The European economic growth strategy and its supporting framework for research and innovation highlight the importance of nurturing new open innovation in order to strengthen Europe’s competitiveness. One of the main approaches to enhancing innovation in European society is the Triple Helix model, which centres on science-industry collaboration and assigns the managerial role to the universities. In spite of the defined collaboration strategy, collaboration between academia and industry in Europe still faces many challenges. Many of them are explained by cultural differences: academic culture aims at scientific knowledge, while businesses are oriented towards production and profitable results; the execution of collaborative projects is also seen differently by the partners involved. This shows that traditional management strategies applied to collaboration between researchers and businesses are not effective. There is a need for dynamic strategies that can support the interaction between researchers and industry, intensifying knowledge co-creation and contributing to the development of the national innovation system (NIS) by incorporating individual, organizational and inter-organizational learning. In order to find a good example to follow, the authors of this paper have investigated one of the most rapidly developing knowledge-based innovation societies, Singapore. Singapore does not possess much of the land or sea resources that normally provide income for a country. Therefore, Singapore was forced to think differently and build its society on the resources that are available: talented people and knowledge. During the last twenty years, Singapore has developed by attracting highly rated university campuses, research institutions and leading industrial companies from all over the world. This article elucidates and elaborates Singapore’s national innovation strategies from a Knowledge Management perspective. The research covers the variety of organizations that enable and support knowledge development in this state: governmental research and development (R&D) centers in universities, private talent incubators for entrepreneurs, and industrial companies with their own R&D departments. The research methods are based on presentations, documents, and visits to a number of universities, research institutes, innovation parks, governmental institutions, industrial companies and innovation exhibitions in Singapore. In addition, a review of the scientific literature on the topic is made. The first finding is that the objectives of collaboration between researchers, entrepreneurs and industry in Singapore correspond to the primary goals of the state: knowledge and economic growth. There are common objectives for all stakeholders at all national levels. The second finding is that Singapore has an enabling system at the national level that supports innovation all the way from fostering or capturing new knowledge, through knowledge exchange and co-creation, to its application in real life. The conclusion is that innovation means not only a new idea but also the enabling mechanism for its execution and a market-oriented approach so that new knowledge can be absorbed by society. Future research can address the application of Singapore’s knowledge management strategy for innovation to European countries.

Keywords: knowledge management strategy, national innovation system, research industry and business collaboration, knowledge enabling

Procedia PDF Downloads 166
9362 Unsupervised Part-of-Speech Tagging for Amharic Using K-Means Clustering

Authors: Zelalem Fantahun

Abstract:

Part-of-speech tagging is the process of assigning a part-of-speech or other lexical class marker to each word in naturally occurring text. It is one of the most fundamental and basic tasks in almost all natural language processing. In natural language processing, the problem of providing large amounts of manually annotated data is a knowledge acquisition bottleneck. Since Amharic is an under-resourced language, the availability of a tagged corpus is the bottleneck for natural language processing, especially for POS tagging. A promising direction to tackle this problem is to provide a system that does not require manually tagged data. In unsupervised learning, the learner is not provided with classifications. Unsupervised algorithms seek out similarity between pieces of data in order to determine whether they can be characterized as forming a group. This paper describes the development of an unsupervised part-of-speech tagger for Amharic using K-means clustering, since large amounts of data are produced in day-to-day activities. In the development of the tagger, the following procedures are followed. First, the unlabeled data (raw text) is divided into 10 folds and the tokenization phase takes place; at this level, the raw text is chunked at the sentence level and then into words. The second phase is feature extraction, which includes word frequency and the syntactic and morphological features of a word. The third phase is clustering. Among the different clustering algorithms, K-means is selected and implemented in this study to bring groups of similar words together. The fourth phase is mapping, in which each cluster is examined carefully and the most common tag is assigned to the group. This study identifies two features that are capable of distinguishing one part of speech from others, namely morphological features and positional information, and shows that it is possible to use unsupervised learning for Amharic POS tagging. In order to increase the performance of the unsupervised part-of-speech tagger, there is a need to incorporate other features not included in this study, such as semantically related information. Finally, based on the experimental results, the system achieves a maximum of 81% accuracy.
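
A minimal sketch of the cluster-then-map idea described above is given below; the toy English tokens stand in for Amharic text, and the frequency, positional and length features are illustrative stand-ins for the paper's morphological and positional features.

```python
# Toy cluster-then-map POS sketch: build simple per-token features, cluster
# them with K-means, then inspect each cluster so that its most common tag
# can be assigned by hand (the "mapping" phase described above).
from collections import Counter, defaultdict

import numpy as np
from sklearn.cluster import KMeans

# toy English tokens standing in for tokenised Amharic sentences
sentences = [["the", "dog", "runs", "fast"],
             ["the", "cat", "sleeps", "quietly"],
             ["a", "bird", "sings", "loudly"]]

tokens = [(w, i, len(s)) for s in sentences for i, w in enumerate(s)]
freq = Counter(w for w, _, _ in tokens)

def features(word: str, idx: int, sent_len: int) -> list:
    return [
        freq[word],                     # word frequency
        idx / max(sent_len - 1, 1),     # relative position in the sentence
        len(word),                      # crude morphological cue (word length)
    ]

X = np.array([features(*t) for t in tokens], dtype=float)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

clusters = defaultdict(list)
for (word, _, _), label in zip(tokens, labels):
    clusters[label].append(word)
for label, words in sorted(clusters.items()):
    print(f"cluster {label}: {words}")   # each cluster then gets its most common tag
```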

Keywords: POS tagging, Amharic, unsupervised learning, k-means

Procedia PDF Downloads 426
9361 Embodied Cognition as a Concept of Educational Neuroscience and Phenomenology

Authors: Elham Shirvani-Ghadikolaei

Abstract:

In this paper, we examine the connection between the human mind and body within the framework of Merleau-Ponty's phenomenology. We study the role of this connection in designing more efficient learning environments, alongside findings in embodied cognition and educational neuroscience. Our research shows the interplay between the mind and the body in the external world and discusses its implications. Based on these observations, we make suggestions as to how the educational system can benefit from taking into account the interaction between the mind and the body in educational affairs.

Keywords: educational neurosciences, embodied cognition, pedagogical neurosciences, phenomenology

Procedia PDF Downloads 294
9360 Formulating a Definition of Hate Speech: From Divergence to Convergence

Authors: Avitus A. Agbor

Abstract:

Numerous incidents, ranging from trivial to catastrophic, come to mind when one reflects on hate. The victims of these belong to specific identifiable groups within communities. These experiences evoke discussions on Islamophobia, xenophobia, homophobia, anti-Semitism, racism, ethnic hatred, atheism, and other brutal forms of bigotry. Common to all of these is an invisible but potent force that drives them: hatred. Such hatred is usually fueled by a profound degree of intolerance (of diversity) and the zeal to impose on others the beliefs and practices that the perpetrators consider to be the conventional norm. More importantly, the perpetuation of these hateful acts is the unfortunate outcome of an overplay of invectives and hate speech which, to a great extent, cannot be divorced from hate. From a legal perspective, acknowledging the existence of an undeniable link between hate speech and hate is quite easy. However, both within and without legal scholarship, the notion of “hate speech” remains a conundrum: a phrase that is more easily explained through experience than through a watertight definition that captures its entire essence and nature. The problem is further compounded by a few factors: first, within the international human rights framework, the notion of hate speech is not used. In limiting the right to freedom of expression, the ICCPR simply excludes specific kinds of speech (but does not refer to them as hate speech). Regional human rights instruments are not so different, except for the subsequent developments that took place in the European Union, in which the notion has been carefully delineated and a much clearer picture of what constitutes hate speech is now provided. The legal architecture in domestic legal systems clearly shows differences in approaches and regulation, making the matter more difficult. In short, what may be hate speech in one legal system may very well be acceptable legal speech in another legal system. Lastly, the cornucopia of academic voices on the issue of hate speech reflects the divergence of views thereon. Yet, in the absence of a well-formulated and universally acceptable definition, it is important to consider how hate speech can be defined. Taking an evidence-based approach, this research looks into the issue of defining hate speech in legal scholarship and how and why such a formulation is of critical importance in the prohibition and prosecution of hate speech.

Keywords: hate speech, international human rights law, international criminal law, freedom of expression

Procedia PDF Downloads 49
9359 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank

Authors: Jalal Haghighat Monfared, Zahra Akbari

Abstract:

Today, business executives need useful information to make better decisions. Banks have also been using information tools, rapidly extracting information from sources with the help of business intelligence, so that they can direct the decision-making process towards their desired goals. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's General Departments (190 people) who use business intelligence reports. The sample size of 123 was determined randomly by statistical methods. In this research, relevant statistical inference has been used for data analysis and hypothesis testing. In the first stage, the normality of the data was investigated using the Kolmogorov-Smirnov test, and in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, integration with other systems, user access, flexibility and risk management support, the flexibility of the business intelligence system was the most strongly correlated with the dependent variable of the present research. This shows that Mellat Bank needs to pay more attention to choosing business intelligence systems with high flexibility in terms of the ability to produce custom-formatted reports. Subsequently, the quality of the data in business intelligence systems showed the strongest relationship with the quality of decision making. Therefore, improving data quality (including whether the data source is internal or external, whether the data are quantitative or qualitative, the credibility of the data, and the perceptions of those who use the business intelligence system) improves the quality of decision making in Mellat Bank.
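
As a hedged sketch of two of the statistical checks mentioned above (a Kolmogorov-Smirnov normality test followed by a Pearson correlation), the snippet below uses SciPy; the variable names and simulated questionnaire scores are illustrative assumptions, not the study's data.

```python
# Normality check of the BI-capability score, then its Pearson correlation
# with the decision-quality score, on simulated questionnaire data (n = 123).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bi_capability = rng.normal(3.5, 0.6, size=123)                       # placeholder scores
decision_quality = 0.7 * bi_capability + rng.normal(0, 0.4, size=123)

# Kolmogorov-Smirnov test against a normal distribution fitted to the sample
ks_stat, ks_p = stats.kstest(bi_capability, "norm",
                             args=(bi_capability.mean(), bi_capability.std(ddof=1)))

r, p = stats.pearsonr(bi_capability, decision_quality)
print(f"KS p = {ks_p:.3f}, Pearson r = {r:.2f} (p = {p:.3g})")
```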

Keywords: business intelligence, business intelligence capability, decision making, decision quality

Procedia PDF Downloads 99
9358 Studies of Heavy Metal Ions Removal Efficiency in the Presence of Anionic Surfactant Using Ion Exchangers

Authors: Anna Wolowicz, Katarzyna Staszak, Zbigniew Hubicki

Abstract:

Nowadays, heavy metal ions as well as surfactants are widely used throughout the world due to their useful properties. The consequence of such widespread use is their significant production. On the other hand, the increasing demand for surfactants and heavy metal ions results in the production of large amounts of wastewater, which is discharged into the environment from the mining, metal plating, pharmaceutical, cosmetic, fertilizer, paper, pesticide and electronics industries, pigment production, petroleum refining, and the autocatalyst, fiber, food and polymer industries, etc. Heavy metal ions are non-biodegradable in the environment, capable of accumulating in living organisms and organs, toxic and carcinogenic. On the other hand, not only heavy metal ions but also surfactants affect the purity of water and soils. Some surfactants are also toxic, harmful and dangerous because they penetrate into surface waters, causing foaming and blocking the diffusion of oxygen from the atmosphere, and they act as emulsifiers of hydrophobic substances and increase the solubility of many dangerous pollutants. Among surfactants, the anionic ones dominate, and their share in global surfactant production is around 50-60%. Due to the negative impact of heavy metals and surfactants on aquatic ecosystems and living organisms, the removal and monitoring of their concentrations in the environment are extremely important. The removal of surfactants and heavy metal ions can be achieved by different biological and physicochemical methods. Adsorption as well as ion-exchange methods play a significant role here. The aim of this study was the removal of heavy metal ions from aqueous solutions using different types of ion exchangers in the presence of anionic surfactants. Preliminary studies of copper(II), nickel(II), zinc(II) and cobalt(II) removal from acidic solutions using ion exchangers (Lewatit MonoPlus TP 220, Lewatit MonoPlus SR 7, Purolite A 400 TL, Purolite A 830, Purolite S 984, Dowex PSR 2, Dowex PSR3, Lewatit AF-5) allowed the most effective ones for the above-mentioned sorbates to be selected and their removal efficiency to then be checked in the presence of anionic surfactants. It was found that the chelating-type Lewatit MonoPlus TP 220 shows the highest sorption capacities for copper(II) ions in comparison with the other ion exchangers under discussion, e.g. 9.98 mg/g (0.1 M HCl) and 9.12 mg/g (6 M HCl). Moreover, cobalt(II) removal efficiency was also highest in 0.1 M HCl using Lewatit MonoPlus TP 220 (6.9 mg/g), similarly to zinc(II) (9.1 mg/g) and nickel(II) (6.2 mg/g). Sodium dodecyl sulphate (SDS) was used as the anionic surfactant, and surfactant parameters such as viscosity (η), density (ρ) and critical micelle concentration (CMC) were obtained: η = 1.13 ± 0.01 mPa·s; ρ = 999.76 mg/cm3; CMC = 2.26 g/cm3. Studies of copper(II) removal from acidic solutions in the presence of SDS of different concentrations showed a negligible effect on copper(II) removal efficiency. The sorption capacity for Cu(II) from a 0.1 M acidic solution with a 500 mg/L initial concentration was equal to 46.8 mg/g, whereas in the presence of SDS it was 45.3 mg/g (0.1 mg SDS/L), 47.1 mg/g (0.5 mg SDS/L) and 46.6 mg/g (1 mg SDS/L).
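
The sorption capacities quoted above (in mg of metal per g of ion exchanger) are conventionally obtained from a mass balance of the batch experiment; the relation below is the standard form and is not stated explicitly in the abstract.

```latex
% Standard batch sorption capacity: C_0 and C_e are the initial and equilibrium
% metal-ion concentrations (mg/L), V the solution volume (L), and m the mass of
% ion exchanger (g); q_e is then expressed in mg/g.
\[
  q_e = \frac{(C_0 - C_e)\, V}{m}
\]
```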

Keywords: anionic surfactant, heavy metal ions, ion exchanger, removal

Procedia PDF Downloads 124
9357 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales

Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias

Abstract:

Early identification of individuals at risk for conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent in terms of what types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition Scales (ECog) scales, a measure of self-reported concerns for everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either MCI or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). A total of 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Prior to controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03). However, after controlling for depression, only two specific complaints of remembering appointments, meetings, and engagements and understanding spoken directions and instructions were associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not significantly change over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (Interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p <0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (Interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion. Specific complaints in the domains of Everyday Memory and Language are associated with a decline in both episodic memory and executive function. Increased monitoring and treatment of individuals with these specific SCD may be warranted.
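
A hedged sketch of the Cox proportional hazards step described above is given below using the lifelines package; the data-frame layout (one ECog item score plus a depression covariate, follow-up time in years and a progression indicator) is an assumption for illustration, not the study's actual dataset.

```python
# Cox proportional hazards model relating one ECog item (adjusted for
# depression) to time-to-progression, on simulated data for n = 415 adults.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 415
df = pd.DataFrame({
    "ecog_item": rng.integers(1, 5, size=n),     # 1 = no change ... 4 = much worse
    "depression": rng.normal(0, 1, size=n),      # covariate used in follow-up models
    "years": rng.uniform(0.8, 13.8, size=n),     # time to progression or censoring
    "progressed": rng.integers(0, 2, size=n),    # 1 = converted to MCI/dementia
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="progressed")
cph.print_summary()   # hazard ratios (HR) per covariate, including the ECog item
```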

Keywords: Alzheimer’s disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline

Procedia PDF Downloads 63
9356 Risk-Sharing Financing of Islamic Banks: Better Shielded against Interest Rate Risk

Authors: Mirzet SeHo, Alaa Alaabed, Mansur Masih

Abstract:

In theory, risk-sharing-based financing (RSF) is considered a cornerstone of Islamic finance. It is argued to render Islamic banks more resilient to shocks. In practice, however, this feature of Islamic financial products is almost negligible. Instead, debt-based instruments with conventional-like features have overwhelmed the nascent industry. In addition, the framework of present-day economic, regulatory and financial reality inevitably exposes Islamic banks in dual banking systems to the problems of conventional banks. This includes, but is not limited to, interest rate risk. Empirical evidence has, thus far, confirmed such exposures, despite Islamic banks’ interest-free operations. This study applies system GMM in modeling the determinants of RSF and finds that RSF is insensitive to changes in interest rates. Hence, our results provide support to the “stability” view of risk-sharing-based financing. This suggests RSF as the way forward for risk management at Islamic banks, in the absence of widely acceptable Shariah-compliant hedging instruments. Further support to the stability view is given by evidence of counter-cyclicality. Unlike debt-based lending, which inflates artificial asset bubbles through credit expansion during the upswing of business cycles, RSF is negatively related to GDP growth. Our results also imply a significantly strong relationship between risk-sharing deposits and RSF. However, the pass-through of these deposits to RSF is economically low. Only about 40% of risk-sharing deposits are channeled to risk-sharing financing. This raises questions on the validity of the industry’s claim that depositors accustomed to conventional banking shy away from risk sharing, and it signals potential for better balance sheet management at Islamic banks. Overall, our findings suggest that, on the one hand, Islamic banks can gain ‘independence’ from conventional banks and interest rates through risk-sharing products, the potential for which is enormous. On the other hand, RSF could enable policy makers to improve systemic stability and restrain excessive credit expansion through its countercyclical features.

Keywords: Islamic banks, risk-sharing, financing, interest rate, dynamic system GMM

Procedia PDF Downloads 303
9355 Informing, Enabling and Inspiring Social Innovation by Geographic Systems Mapping: A Case Study in Workforce Development

Authors: Cassandra A. Skinner, Linda R. Chamberlain

Abstract:

The nonprofit and public sectors are increasingly turning to Geographic Information Systems for data visualizations that can better inform programmatic and policy decisions. Additionally, the private and nonprofit sectors are turning to systems mapping to better understand the ecosystems within which they operate. This study explores the potential that combining these data visualization methods, an approach called geographic systems mapping, may have for social innovation efforts by creating an exhaustive and comprehensive understanding of a social problem’s ecosystem. Researchers with Grand Valley State University collaborated with Talent 2025 of West Michigan to conduct a mixed-methods research study to paint a comprehensive picture of the workforce development ecosystem in West Michigan. Using semi-structured interviewing, observation, secondary research, and quantitative analysis, data were compiled on workforce development organizations’ locations, programming, metrics for success, partnerships, funding sources, and service language. To best visualize and disseminate the data, a geographic systems map was created which identifies programmatic, operational, and geographic gaps in workforce development services in West Michigan. By combining geographic and systems mapping methods, the geographic systems map provides insight into the cross-sector relationships, collaboration, and competition that exist among and between workforce development organizations. These insights identify opportunities for and constraints around cross-sectoral social innovation in the West Michigan workforce development ecosystem. This paper will discuss the process utilized to prepare the geographic systems map, explain the results and outcomes, and demonstrate how geographic systems mapping illuminated the needs of the community and opportunities for social innovation. As complicated social problems like unemployment often require cross-sectoral and multi-stakeholder solutions, there is potential for geographic systems mapping to be a tool which informs, enables, and inspires these solutions.
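
Note: a minimal sketch of the geographic half of such a geographic systems map is given below, assuming a simple table of provider locations. Organizations are projected to a metric CRS, buffered by an assumed service radius, and the union of buffers is compared against a boundary layer to reveal geographic service gaps. The file name, column names, coordinates, and the 5 km radius are all hypothetical.

```python
import geopandas as gpd
import pandas as pd

# Hypothetical provider table: name, program type, longitude/latitude.
providers = pd.DataFrame({
    "name": ["Org A", "Org B", "Org C"],
    "program": ["job training", "placement", "job training"],
    "lon": [-85.668, -85.580, -86.250],
    "lat": [42.963, 42.900, 43.230],
})

# Build a GeoDataFrame and project to a metric CRS (Web Mercator is adequate for a rough sketch).
gdf = gpd.GeoDataFrame(
    providers,
    geometry=gpd.points_from_xy(providers.lon, providers.lat),
    crs="EPSG:4326",
).to_crs(epsg=3857)

# Assume a 5 km service radius around each provider and merge the buffers.
service_area = gdf.buffer(5_000).unary_union

# Parts of a (hypothetical) boundary layer not covered by any provider are candidate gaps.
counties = gpd.read_file("west_michigan_boundaries.geojson").to_crs(epsg=3857)
gaps = counties.geometry.difference(service_area)
print(gaps.area / 1e6)  # uncovered area per boundary unit, in km^2
```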

Keywords: cross-sector collaboration, data visualization, geographic systems mapping, social innovation, workforce development

Procedia PDF Downloads 277
9354 Spatial Analysis of the Impact of Aquifer Capacity Reduction on Land Subsidence Rate in Semarang City between 2014-2017

Authors: Yudo Prasetyo, Hana Sugiastu Firdaus, Diyanah Diyanah

Abstract:

The phenomenon of insufficient clean water supply in several big cities in Indonesia is a major problem in the development of urban areas. Moreover, in the city of Semarang, the population density and the growth of physical development are very high. Continuous, large-scale extraction of underground water (from the aquifer) can result in a drastic year-by-year decline in aquifer supply, especially given the intensity of aquifer use for the fulfilment of household needs and industrial activities. This is worsened by the land subsidence phenomenon in some areas of Semarang. Therefore, dedicated research is needed to determine the spatial correlation between the decreasing aquifer capacity and the land subsidence phenomenon. This is necessary to confirm that the occurrence of land subsidence can be caused by a loss of pressure balance below the land surface. One method to observe the correlation pattern between the two phenomena is the application of remote sensing technology based on radar and optical satellites. Implementation of the Differential Interferometric Synthetic Aperture Radar (DInSAR) or Small Baseline Subset (SBAS) method on SENTINEL-1A satellite images acquired in the 2014-2017 period will give a proper pattern of land subsidence. These results will be spatially correlated with the aquifer-decline pattern over the same time period. Survey results from 8 monitoring wells with depths above 100 m will be used to observe the multi-temporal pattern of aquifer capacity change. In addition, the aquifer capacity pattern will be validated against 2 groundwater basin maps from observations by the Ministry of Energy and Mineral Resources (ESDM) for the city of Semarang. Spatial correlation studies of the land subsidence and aquifer capacity patterns will be conducted using overlay and statistical methods. The results of this correlation will show how strongly the decrease in underground water capacity influences the distribution and intensity of land subsidence in Semarang. In addition, the results of this study will also be analyzed with respect to geological aspects related to hydrogeological parameters, soil types, aquifer types and geological structures. The outcome of this study will be a map correlating aquifer capacity with land subsidence in the city of Semarang for the period 2014-2017, which will hopefully help the authorities in the future spatial planning of Semarang.
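
Note: the spatial correlation step described above could be sketched as follows, assuming the SBAS-derived subsidence rates and the interpolated aquifer decline have already been resampled onto the same grid. The array file names and the use of a simple Pearson correlation are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical co-registered grids (same shape, same georeferencing):
# subsidence_rate: mm/year from DInSAR/SBAS processing of Sentinel-1A, 2014-2017
# aquifer_decline: m/year of groundwater level drop interpolated from the monitoring wells
subsidence_rate = np.load("sbas_subsidence_mm_per_year.npy")
aquifer_decline = np.load("aquifer_decline_m_per_year.npy")

# Keep only cells where both grids have valid data.
mask = ~np.isnan(subsidence_rate) & ~np.isnan(aquifer_decline)
r, p = pearsonr(subsidence_rate[mask], aquifer_decline[mask])
print(f"Pearson r = {r:.2f} (p = {p:.3g}) over {mask.sum()} grid cells")
```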

Keywords: aquifer, differential interferometric synthetic aperture radar (DInSAR), land subsidence, small baseline subset (SBAS)

Procedia PDF Downloads 167
9353 Assessment of Indoor Air Pollution in Naturally Ventilated Dwellings of Mega-City Kolkata

Authors: Tanya Kaur Bedi, Shankha Pratim Bhattacharya

Abstract:

The US Environmental Protection Agency defines indoor air quality as “the air quality within and around buildings, especially as it relates to the health and comfort of building occupants”. According to the 2021 report by the Energy Policy Institute at Chicago, residents of India, the country with the highest levels of air pollution in the world, lose about 5.9 years of life expectancy due to poor air quality, and yet the country has numerous dwellings dependent on natural ventilation. The urban population currently spends 90% of its time indoors, a scenario that raises concern for occupant health and well-being. This study attempts to demonstrate the causal relationship between indoor air pollution and its determining aspects. Detailed indoor air pollution audits were conducted in residential buildings located in Kolkata, India in the months of December and January 2021. According to the air pollution knowledge assessment city program in India, Kolkata is the second most polluted mega-city after Delhi. Although air pollution levels are alarming year-round, the winter months are the most crucial due to unfavourable environmental conditions: while emissions remain typically constant throughout the year, cold air is denser and moves more slowly than warm air, trapping the pollution in place for much longer, so that it is consequently breathed in at a higher rate than in the summer. The air pollution monitoring period was selected considering environmental factors and major pollution contributors like traffic and road dust. This study focuses on the relationship between the built environment and the spatial-temporal distribution of air pollutants in and around it. The measured parameters include temperature, relative humidity, air velocity, particulate matter, volatile organic compounds, formaldehyde, and benzene. A total of 56 rooms were audited, selectively targeting the most dominant middle-income group in the urban area of the metropolis. The data collection was conducted using a set of instruments positioned in the human breathing zone. The study assesses the relationship between indoor air pollution levels and factors determining natural ventilation and air pollution dispersion, such as the surrounding environment, dominant wind, openable window to floor area ratio, windward or leeward side openings, natural ventilation type in the room (single-sided or cross-ventilation), floor height, residents’ cleaning habits, etc.
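
Note: as a sketch of how the room-level audit data might be related to the ventilation factors listed above, the snippet below computes simple correlations between indoor PM2.5 and a few candidate predictors. The CSV layout and column names are hypothetical and are not the study's actual coding scheme.

```python
import pandas as pd

# Hypothetical room-level audit table, one row per audited room (56 rooms).
rooms = pd.read_csv("kolkata_room_audits.csv")  # assumed columns used below

predictors = [
    "window_to_floor_ratio",   # openable window area / floor area
    "floor_height_m",          # height of the room's floor above ground
    "cross_ventilation",       # 1 = cross-ventilated, 0 = single-sided
]

# Exploratory step: rank predictors by their correlation with indoor PM2.5 concentration.
correlations = rooms[predictors + ["pm25_ug_m3"]].corr()["pm25_ug_m3"].drop("pm25_ug_m3")
print(correlations.sort_values())
```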

Keywords: indoor air quality, occupant health, air pollution, architecture, urban environment

Procedia PDF Downloads 91
9352 Introduction of an Approach of Complex Virtual Devices to Achieve Device Interoperability in Smart Building Systems

Authors: Thomas Meier

Abstract:

One of the major challenges for sustainable smart building systems is to support device interoperability, i.e. connecting sensor or actuator devices from different vendors and presenting their functionality to external applications. Furthermore, smart building systems are supposed to connect with devices that are not available yet, i.e. devices that become available on the market sometime later. It is of vital importance that a sustainable smart building platform provides an appropriate external interface that can be leveraged by external applications and smart services. An external platform interface must be stable and independent of specific devices and should support flexible and scalable usage scenarios. A typical approach applied in smart home systems is based on a generic device interface used within the smart building platform. Device functions, even of rather complex devices, are mapped to that generic base type interface by means of specific device drivers. Our new approach, presented in this work, extends that approach by using the smart building system’s rule engine to create complex virtual devices that can represent the most diverse properties of real devices. We examined and evaluated both approaches by means of a practical case study using a smart building system that we have developed. We show that the solution we present allows the highest degree of flexibility without affecting external application interface stability and scalability. In contrast to other systems, our approach supports complex virtual device configuration at the application layer (e.g. by administration users) instead of device configuration at the platform layer (e.g. by platform operators). Based on our work, we can show that our approach supports almost arbitrarily flexible use case scenarios without affecting the external application interface stability. However, the cost of this approach is additional configuration overhead and additional resource consumption at the IoT platform level that must be considered by platform operators. We conclude that the concept of complex virtual devices presented in this work can be applied to significantly improve the usability and device interoperability of sustainable intelligent building systems.
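
Note: to make the idea of a rule-defined complex virtual device more concrete, here is a minimal, platform-agnostic sketch in which two vendor-specific sensors are composed by a rule into one virtual device that an external application interface could expose without knowing the underlying hardware. The class and method names are illustrative and do not correspond to the platform described in the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DeviceReading:
    device_id: str
    value: float

class ComplexVirtualDevice:
    """A virtual device whose state is derived from real devices by a rule."""

    def __init__(self, name: str, rule: Callable[[Dict[str, float]], float]):
        self.name = name
        self.rule = rule                      # rule-engine logic, configured at application layer
        self._inputs: Dict[str, float] = {}

    def on_reading(self, reading: DeviceReading) -> None:
        """Called by device drivers whenever a real sensor publishes a value."""
        self._inputs[reading.device_id] = reading.value

    @property
    def value(self) -> float:
        """Stable, vendor-independent value exposed to external applications."""
        return self.rule(self._inputs)

# Example: a "room climate index" device combining two vendors' sensors.
climate = ComplexVirtualDevice(
    "room_climate_index",
    rule=lambda inputs: 0.7 * inputs.get("vendorA_temp", 0.0)
                      + 0.3 * inputs.get("vendorB_humidity", 0.0),
)
climate.on_reading(DeviceReading("vendorA_temp", 21.5))
climate.on_reading(DeviceReading("vendorB_humidity", 48.0))
print(climate.value)
```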

Keywords: Internet of Things, smart building, device interoperability, device integration, smart home

Procedia PDF Downloads 248
9351 Kalman Filter for Bilinear Systems with Application

Authors: Abdullah E. Al-Mazrooei

Abstract:

In this paper, we present a new kind of bilinear system in state-space form. The evolution of this system depends on the product of the state vector with itself. The well-known Lotka-Volterra and Lorenz models are special cases of this new model. We also present a generalization of the Kalman filter that is suitable for the new bilinear model. An application to real measurements is introduced to illustrate the efficiency of the proposed algorithm.
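
Note: the abstract does not spell out the authors' generalization, so the sketch below shows one possible filter of this kind: the state transition contains a term quadratic in the state, and the prediction step linearizes it with a Jacobian, i.e. an extended-Kalman-filter-style update rather than the paper's exact method. The matrices and noise levels are illustrative.

```python
import numpy as np

# Bilinear-in-the-state model (illustrative):
#   x_{k+1} = A x_k + x_k ⊙ (B x_k) + w_k,   y_k = H x_k + v_k
A = np.array([[0.95, 0.10], [0.00, 0.90]])
B = np.array([[0.01, 0.00], [0.00, 0.02]])
H = np.eye(2)
Q = 1e-3 * np.eye(2)   # process noise covariance
R = 1e-2 * np.eye(2)   # measurement noise covariance

def f(x):
    return A @ x + x * (B @ x)          # element-wise product of x with Bx

def jacobian_f(x):
    # d/dx [A x + x ⊙ (B x)] = A + diag(B x) + diag(x) B
    return A + np.diag(B @ x) + np.diag(x) @ B

def ekf_step(x_est, P, y):
    # Predict
    F = jacobian_f(x_est)
    x_pred = f(x_est)
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new

# Run the filter on synthetic measurements.
rng = np.random.default_rng(0)
x_true, x_est, P = np.array([1.0, 0.5]), np.array([0.0, 0.0]), np.eye(2)
for _ in range(50):
    x_true = f(x_true) + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
    x_est, P = ekf_step(x_est, P, y)
print("final estimate:", x_est, "true state:", x_true)
```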

Keywords: bilinear systems, state space model, Kalman filter, application, models

Procedia PDF Downloads 411
9350 The Communication of Audit Report: Key Audit Matters in United Kingdom

Authors: L. Sierra, N. Gambetta, M. A. Garcia-Benau, M. Orta

Abstract:

Financial scandals and the financial crisis have led to an international debate on the value of auditing. In recent years there have been significant legislative reforms aiming to increase markets’ confidence in audit services. In particular, there has been a significant debate on the need to improve the communication of auditors with audit report users as a way to improve the reports’ informative value and, thus, audit quality. The International Auditing and Assurance Standards Board (IAASB) has proposed changes to the audit report standards. The International Standard on Auditing 701, Communicating Key Audit Matters (KAM) in the Independent Auditor's Report, has introduced new concepts that go beyond the auditor's opinion and requires the auditor to disclose the risks that, from the auditor's point of view, are most significant in the audited company's information. Focusing on the companies included in the Financial Times Stock Exchange 100 index, this study analyzes the determinants of the number of KAM disclosed by the auditor in the audit report and, moreover, the determinants of the different types of KAM reported during the period 2013-2015. To test the hypotheses in the empirical research, two different models have been used. The first is a linear regression model to identify the client characteristics, industry sector and auditor characteristics that are related to the number of KAM disclosed in the audit report. Secondly, a logistic regression model is used to identify the determinants of each KAM type disclosed in the audit report; in line with the risk-based approach to auditing financial statements, we categorized the KAM into two groups: entity-level KAM and account-level KAM. Regarding the impact of auditor characteristics on KAM disclosure, the results show that PwC tends to report a larger number of KAM, while KPMG tends to report fewer KAM in the audit report. Further, PwC reports a larger number of entity-level risk KAM, while KPMG reports fewer account-level risk KAM. The results also show that companies paying higher fees tend to have more entity-level risk KAM and fewer account-level risk KAM. The materiality level is positively related to the number of account-level risk KAM. Additionally, the results show that the relationship between client characteristics and the number of KAM is more evident for account-level risk KAM than for entity-level risk KAM. A highly leveraged company carries a great deal of risk, but because of this it is usually subject to strong monitoring by capital providers, resulting in fewer account-level risk KAM. The results reveal that the number of account-level risk KAM is strongly related to the industry sector in which the company operates. This study helps to understand the UK audit market, provides information to auditors and, finally, opens new research avenues in academia.
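
Note: the two models described above can be sketched with standard tooling as follows, with a linear regression for the total number of KAM and a logistic regression for whether a given KAM type is disclosed (one plausible reading of the second model). The data file, column names, and the choice of statsmodels are assumptions for illustration, not the authors' exact specification.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical FTSE 100 firm-year panel, 2013-2015.
df = pd.read_csv("ftse100_kam_panel.csv")  # assumed columns used below

X = sm.add_constant(df[["audit_fees", "leverage", "materiality", "big4_pwc", "big4_kpmg"]])

# Model 1: determinants of the total number of KAM disclosed (linear regression).
ols = sm.OLS(df["n_kam"], X).fit()
print(ols.summary())

# Model 2: determinants of disclosing at least one entity-level risk KAM (logistic regression).
logit = sm.Logit(df["entity_level_kam"], X).fit()
print(logit.summary())
```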

Keywords: FTSE 100, ISA 701, key audit matters, auditor’s characteristics, client’s characteristics

Procedia PDF Downloads 211