Search results for: mobile cellular network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6772

802 Analysing the Moderating Effect of Customer Loyalty on Long Run Repurchase Intentions

Authors: John Akpesiri Olotewo

Abstract:

One of the controversies in the existing marketing literature is how to retain existing and new customers so that they form repurchase intentions in the long run; however, empirical answers to this question are scanty in existing studies. Thus, this study investigates the moderating effect of consumer loyalty on long-run repurchase intentions in the telecommunication industry in Lagos State and its environs. The study adopted a field survey research design, using a questionnaire to elicit responses from 250 respondents who were selected through random and stratified random sampling techniques from the telecommunication industry in Lagos State, Nigeria. The internal consistency of the research instrument was verified using Cronbach's alpha; the result of 0.89 indicates acceptable internal consistency of the survey instrument. The research hypotheses were tested using Pearson Product Moment Correlation (PPMC), simple regression analysis and inferential statistics with the aid of the Statistical Package for the Social Sciences (SPSS) version 20.0. The study confirmed that customer satisfaction has a significant relationship with customer loyalty in the telecommunication industry; service quality has a significant relationship with customer loyalty to a brand; loyalty programs have a significant relationship with customer loyalty to a network operator in Nigeria; and customer loyalty has a significant effect on the long-run repurchase intentions of the customer. The study concluded that one of the determinants of the long-term profitability of a business entity is the long-run repurchase intentions of its customers, which hinge on the level of brand loyalty of the customer. It was therefore recommended that service providers in Nigeria improve on factors like customer satisfaction, service quality, and loyalty programs in order to increase the loyalty of their customers to their brands, thereby increasing repurchase intentions.
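For readers who want to reproduce the internal-consistency check, a minimal sketch of Cronbach's alpha on a respondent-by-item score matrix is shown below. The responses are hypothetical placeholders; the abstract itself only reports the resulting coefficient of 0.89.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of questionnaire items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from 6 respondents on 4 items
responses = np.array([
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```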

Keywords: customer loyalty, long run repurchase intentions, brands, service quality and customer satisfaction

Procedia PDF Downloads 230
801 Reimagining the Management of Telco Supply Chain with Blockchain

Authors: Jeaha Yang, Ahmed Khan, Donna L. Rodela, Mohammed A. Qaudeer

Abstract:

Traditional supply chain silos still exist today due to the difficulty of establishing trust between various partners and technological barriers across industries. Companies lose opportunities and revenue and inadvertently make poor business decisions, resulting in further challenges. Blockchain technology can bring a new level of transparency by sharing information on a distributed ledger in a decentralized manner, which creates a basis of trust for business. Blockchain is a loosely coupled, hub-style communication network in which trading partners can work indirectly with each other for simpler integration, yet still work together through the orchestration of their supply chain operations under a coherent, jointly developed process. Blockchain increases efficiencies, lowers costs, and improves interoperability to strengthen and automate the supply chain management process while all partners share the risk. The blockchain ledger is built to track the inventory lifecycle for supply chain transparency and keeps a journal of inventory movement for real-time reconciliation. State design patterns are used to capture the lifecycle (behavior) of inventory management as a state machine for a common, transparent and coherent process, which creates an opportunity for trading partners to become more responsive in terms of changes or improvements in process, reconcile discrepancies, and comply with internal governance and external regulations. It enables end-to-end, inter-company visibility at the unit level for more accurate demand planning with better insight into order fulfillment and replenishment.
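The state-machine view of the inventory lifecycle can be sketched in a few lines. The sketch below uses a simple transition table with a movement journal; the paper itself describes using the State design pattern on a shared ledger, and the states named here (ordered, shipped, received, consigned, reconciled) are illustrative assumptions rather than the authors' actual model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Allowed transitions of an inventory unit; in a blockchain deployment each
# transition would be an endorsed transaction written to the shared ledger.
TRANSITIONS = {
    "ORDERED":    {"SHIPPED", "CANCELLED"},
    "SHIPPED":    {"RECEIVED"},
    "RECEIVED":   {"CONSIGNED", "RECONCILED"},
    "CONSIGNED":  {"RECONCILED"},
    "RECONCILED": set(),
    "CANCELLED":  set(),
}

@dataclass
class InventoryUnit:
    sku: str
    state: str = "ORDERED"
    history: list = field(default_factory=list)  # journal of inventory movement

    def transition(self, new_state: str, actor: str) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.state} -> {new_state} is not allowed")
        self.history.append((self.state, new_state, actor,
                             datetime.now(timezone.utc).isoformat()))
        self.state = new_state

unit = InventoryUnit(sku="SIM-CARD-001")        # hypothetical stock item
unit.transition("SHIPPED", actor="supplier")
unit.transition("RECEIVED", actor="telco-warehouse")
print(unit.state, unit.history)
```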

Keywords: supply chain management, inventory traceability, perpetual inventory system, inventory lifecycle, blockchain, inventory consignment, supply chain transparency, digital thread, demand planning, Hyperledger Fabric

Procedia PDF Downloads 88
800 The Impact of PM-Based Regulations on the Concentration and Sources of Fine Organic Carbon in the Los Angeles Basin from 2005 to 2015

Authors: Abdulmalik Altuwayjiri, Milad Pirhadi, Sina Taghvaee, Constantinos Sioutas

Abstract:

A significant portion of PM₂.₅ mass concentration is carbonaceous matter (CM), which majorly exists in the form of organic carbon (OC). Ambient OC originates from a multitude of sources and plays an important role in global climate effects, visibility degradation, and human health. In this study, positive matrix factorization (PMF) was utilized to identify and quantify the long-term contribution of PM₂.₅ sources to total OC mass concentration in central Los Angeles (CELA) and Riverside (i.e., receptor site), using the chemical speciation network (CSN) database between 2005 and 2015, a period during which several state and local regulations on tailpipe emissions were implemented in the area. Our PMF resolved five different factors, including tailpipe emissions, non-tailpipe emissions, biomass burning, secondary organic aerosol (SOA), and local industrial activities for both sampling sites. The contribution of vehicular exhaust emissions to the OC mass concentrations significantly decreased from 3.5 µg/m³ in 2005 to 1.5 µg/m³ in 2015 (by about 58%) at CELA, and from 3.3 µg/m³ in 2005 to 1.2 µg/m³ in 2015 (by nearly 62%) at Riverside. Additionally, SOA contribution to the total OC mass, showing higher levels at the receptor site, increased from 23% in 2005 to 33% and 29% in 2010 and 2015, respectively, in Riverside, whereas the corresponding contribution at the CELA site was 16%, 21% and 19% during the same period. The biomass burning maintained an almost constant relative contribution over the whole period. Moreover, while the adopted regulations and policies were very effective at reducing the contribution of tailpipe emissions, they have led to an overall increase in the fractional contributions of non-tailpipe emissions to total OC in CELA (about 14%, 28%, and 28% in 2005, 2010 and 2015, respectively) and Riverside (22%, 27% and 26% in 2005, 2010 and 2015), underscoring the necessity to develop equally effective mitigation policies targeting non-tailpipe PM emissions.
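As a rough illustration of the source-apportionment step, the sketch below factorizes a speciated OC data matrix into source profiles and contributions. It uses scikit-learn's non-negative matrix factorization as a simplified stand-in for EPA PMF (real PMF additionally weights each observation by its measurement uncertainty); the species matrix and values are synthetic placeholders, not CSN data.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic speciation matrix: rows = sampling days, columns = OC-related
# species/markers (e.g. EC, levoglucosan, hopanes, secondary tracers).
X = rng.uniform(0.1, 5.0, size=(120, 8))

# Resolve five factors, mirroring the five sources reported in the abstract.
model = NMF(n_components=5, init="nndsvda", max_iter=1000, random_state=0)
contributions = model.fit_transform(X)   # (days x factors) source contributions
profiles = model.components_             # (factors x species) source profiles

# Fractional contribution of each factor to the reconstructed OC mass
total_mass = (contributions @ profiles).sum()
for i in range(5):
    share = (contributions[:, [i]] @ profiles[[i], :]).sum() / total_mass
    print(f"factor {i}: {share:.1%} of reconstructed OC")
```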

Keywords: PM₂.₅, organic carbon, Los Angeles megacity, PMF, source apportionment, non-tailpipe emissions

Procedia PDF Downloads 195
799 A Comprehensive Study of Spread Models of Wildland Fires

Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation that is used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors like weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. By using a comparison approach, a critical analysis is provided by identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights that are provided by synthesizing established information. Fire spread models provide insights into potential fire behavior, helping authorities make informed decisions about evacuation activities, allocating resources for firefighting efforts, and planning preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies, as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and promotes our understanding of the way forest fires spread. Some of the known models in this field are Rothermel's wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, the cellular automata model, and others. The key characteristics that these models consider include weather (factors such as wind speed and direction), topography (factors like landscape elevation), and fuel availability (factors like types of vegetation), among others. The models discussed are physics-based, data-driven, or hybrid models, some also utilizing ML techniques like attention-based neural networks to enhance model performance. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action. Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
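Among the model families surveyed, the cellular automata approach is the easiest to demonstrate in a few lines. The sketch below is a toy lattice model with a burn probability modulated by a simple wind bias; it is only meant to illustrate the class of model, not any of the named systems such as FARSITE or WRF-SFIRE, and all parameter values are invented.

```python
import numpy as np

EMPTY, FUEL, BURNING, BURNT = 0, 1, 2, 3

def step(grid, p_base=0.35, wind=(0, 1), wind_bonus=0.25, rng=None):
    """One update of a toy cellular-automaton fire-spread model."""
    rng = rng or np.random.default_rng()
    new = grid.copy()
    rows, cols = grid.shape
    for r, c in zip(*np.where(grid == BURNING)):
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc] == FUEL:
                # higher ignition probability for the cell downwind of the fire
                p = p_base + (wind_bonus if (dr, dc) == wind else 0.0)
                if rng.random() < p:
                    new[rr, cc] = BURNING
        new[r, c] = BURNT
    return new

rng = np.random.default_rng(42)
grid = np.full((50, 50), FUEL)
grid[25, 25] = BURNING            # single ignition point
for _ in range(30):
    grid = step(grid, rng=rng)
print("cells burnt:", int((grid == BURNT).sum()))
```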

Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling

Procedia PDF Downloads 79
798 Contextual Factors of Innovation for Improving Commercial Banks' Performance in Nigeria

Authors: Tomola Obamuyi

Abstract:

The banking system in Nigeria adopted innovative banking with the aim of enhancing financial inclusion, making financial services readily and cheaply available to the majority of the people, and contributing to the efficiency of the financial system. Some of the innovative services include: Automatic Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Point of Sale (PoS), internet (Web) banking, Mobile Money payment (MMO), Real-Time Gross Settlement (RTGS), and agent banking, among others. The introduction of these payment systems is expected to increase bank efficiency and customers' satisfaction, culminating in better performance for the commercial banks. However, opinions differ on the possible effects of the various innovative payment systems on the performance of commercial banks in the country. Thus, this study empirically determines how commercial banks use innovation to gain competitive advantage in the specific context of Nigeria's finance and business. The study also analyses the effects of financial innovation on the performance of commercial banks when different periods of analysis are considered. The study employed secondary data from 2009 to 2018, the period that witnessed aggressive innovation in the financial sector of the country. The Vector Autoregression (VAR) estimation technique was used to forecast the relative variance of each random innovation to the variables in the VAR, to examine the effect of a standard deviation shock to one of the innovations on current and future values through the impulse response, and to determine the causal relationship between the variables (VAR Granger causality test). The study also employed Multi-Criteria Decision Making (MCDM) to rank the innovations and the performance criteria of Return on Assets (ROA) and Return on Equity (ROE). The entropy method of MCDM was used to determine which of the performance criteria better reflects the contributions of the various innovations in the banking sector. On the other hand, the Range of Values (ROV) method was used to rank the contributions of the seven innovations to performance. The analysis was done based on the medium term (five years) and the long run (ten years) of innovations in the sector. The impulse response function derived from the VAR system indicated that the response of ROA to the values of cheque, NEFT and POS transactions was positive and significant in the periods of analysis. The paper also confirmed with the entropy and range of values methods that, in the long run, both CHEQUE and MMO performed best while NEFT was next in performance. The paper concluded that commercial banks would enhance their performance by continuously improving the services provided through cheques, National Electronic Fund Transfer and Point of Sale, since these instruments have long-run effects on their performance. This will increase the confidence of the populace and encourage more usage/patronage of these services. The banking sector will in turn experience better performance, which will improve the economy of the country.
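A brief sketch of the entropy step of the MCDM analysis is given below: it derives objective weights for the performance criteria (ROA, ROE) from a decision matrix whose rows are the innovation channels. All numbers are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical decision matrix: rows = innovation channels, columns = criteria
channels = ["ATM", "NEFT", "PoS", "Web", "MMO", "RTGS", "Cheque"]
criteria = ["ROA", "ROE"]
X = np.array([
    [1.8, 12.4],
    [2.1, 14.0],
    [1.6, 11.2],
    [1.9, 12.9],
    [2.3, 15.1],
    [1.7, 11.8],
    [2.2, 14.6],
])

m = X.shape[0]
P = X / X.sum(axis=0)                        # column-normalised proportions
k = 1.0 / np.log(m)
entropy = -k * (P * np.log(P)).sum(axis=0)   # entropy of each criterion
divergence = 1.0 - entropy                   # degree of diversification
weights = divergence / divergence.sum()      # entropy weights

for name, w in zip(criteria, weights):
    print(f"{name}: weight {w:.3f}")
```

The criterion with the larger entropy weight is the one whose values vary most across the innovation channels, and is therefore taken to better reflect their contributions.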

Keywords: bank performance, financial innovation, multi-criteria decision making, vector autoregression

Procedia PDF Downloads 115
797 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption

Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko

Abstract:

Mafate is a natural circus in the north-western part of Reunion Island, without an electrical grid or road network. A micro-grid concept is being experimented with in this area, composed of photovoltaic production combined with electrochemical batteries, in order to meet the local population's electricity demands through self-consumption. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. The management of the energy leads to a large linearized programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the storage energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumption of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants' planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with an optimal use of the energy in the batteries, while satisfying as widely as possible the users' planning requirements. In the discrete model, the different parameters and solutions of the linear programming system are deterministic scalars, whereas in the stochastic approach, the data parameters and the linear programming solutions become random variables, whose distributions can be imposed or estimated from samples of real observations or from samples of optimal discrete equilibrium solutions.
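A stripped-down version of the deterministic dispatch problem can be written as a linear program. The sketch below, using scipy's linprog with made-up hourly solar and demand profiles and battery parameters, maximizes the served load over 24 hours subject to a simple battery energy balance; the real model in the study is considerably larger and also encodes the inhabitants' planning wishes.

```python
import numpy as np
from scipy.optimize import linprog

T = 24
rng = np.random.default_rng(1)
solar = np.clip(np.sin(np.linspace(0, np.pi, T)) * 6.0, 0, None)  # kW, made up
demand = 2.0 + rng.uniform(0.0, 1.5, T)                            # kW, made up
e_max, c_max, d_max, e_init = 10.0, 3.0, 3.0, 5.0                  # kWh / kW, made up

# Variable vector x = [served | charge | discharge | state_of_charge], each of length T
S, C, D, E = 0, T, 2 * T, 3 * T
n = 4 * T
cost = np.zeros(n)
cost[S:S + T] = -1.0              # linprog minimizes, so maximize served load

# Hourly power balance: served + charge - discharge <= solar
A_ub = np.zeros((T, n))
for t in range(T):
    A_ub[t, S + t], A_ub[t, C + t], A_ub[t, D + t] = 1.0, 1.0, -1.0
b_ub = solar

# Battery energy balance: soc_t - soc_{t-1} - charge_t + discharge_t = 0 (soc_{-1} = e_init)
A_eq = np.zeros((T, n))
b_eq = np.zeros(T)
for t in range(T):
    A_eq[t, E + t], A_eq[t, C + t], A_eq[t, D + t] = 1.0, -1.0, 1.0
    if t == 0:
        b_eq[t] = e_init
    else:
        A_eq[t, E + t - 1] = -1.0

bounds = ([(0, dem) for dem in demand] + [(0, c_max)] * T +
          [(0, d_max)] * T + [(0, e_max)] * T)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(f"served {-res.fun:.1f} kWh of {demand.sum():.1f} kWh demanded")
```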

Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimations of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity

Procedia PDF Downloads 105
796 Functionalized Nanoporous Ceramic Membranes for Electrodialysis Treatment of Harsh Wastewater

Authors: Emily Rabe, Stephanie Candelaria, Rachel Malone, Olivia Lenz, Greg Newbloom

Abstract:

Electrodialysis (ED) is a well-developed technology for ion removal in a variety of applications. However, many industries generate harsh wastewater streams that are incompatible with traditional ion exchange membranes. Membrion® has developed novel ceramic-based ion exchange membranes (IEMs) offering several advantages over traditional polymer membranes: high performance in low pH, chemical resistance to oxidizers, and a rigid structure that minimizes swelling. These membranes are synthesized with our patented silane-based sol-gel techniques. The pore size, shape, and network structure are engineered through a molecular self-assembly process where thermodynamic driving forces are used to direct where and how pores form. Either cationic or anionic groups can be added within the membrane nanopore structure to create cation- and anion-exchange membranes. The ceramic IEMs are produced on a roll-to-roll manufacturing line with low-temperature processing. Membrane performance testing is conducted using in-house permselectivity, area-specific resistance, and ED stack testing setups. Ceramic-based IEMs show comparable performance to traditional IEMs and offer some unique advantages. Long exposure to highly acidic solutions has a negligible impact on ED performance. Additionally, we have observed stable performance in the presence of strong oxidizing agents such as hydrogen peroxide. This stability is expected, as the ceramic backbone of these materials is already in a fully oxidized state. This data suggests ceramic membranes, made using sol-gel chemistry, could be an ideal solution for acidic and/or oxidizing wastewater streams from processes such as semiconductor manufacturing and mining.

Keywords: ion exchange, membrane, silane chemistry, nanostructure, wastewater

Procedia PDF Downloads 83
795 Detect Critical Thinking Skill in Written Text Analysis: The Use of Artificial Intelligence in Text Analysis vs. ChatGPT

Authors: Lucilla Crosta, Anthony Edwards

Abstract:

Companies and the marketplace nowadays struggle to find employees with skills adequate to the anticipated growth of their businesses. At least half of workers will need to undertake some form of up-skilling process in the next five years in order to remain aligned with the requests of the market. In order to meet these challenges, there is a clear need to explore the potential uses of AI (artificial intelligence) based tools in assessing the transversal skills (critical thinking, communication and soft skills of different types in general) of workers and adult students while empowering them to develop those same skills in a reliable, trustworthy way. Companies seek workers with key transversal skills that can make a difference between workers now and in the future. However, critical thinking seems to be one of the most important skills, bringing unexplored ideas and company growth in business contexts. What employers have been reporting for years now is that this skill is lacking in the majority of workers and adult students, and this is particularly visible through their writing. This paper investigates how critical thinking and communication skills are currently developed in Higher Education environments through the use of AI tools at postgraduate levels. It analyses the use of a branch of AI, namely Machine Learning and Big Data, and of Neural Network Analysis. It also examines the potential effects of acquiring these skills through AI tools and what kind of effect this has on employability. This paper will draw information from researchers and studies both at the national (Italy and UK) and international level in Higher Education. The issues associated with the development and use of one specific AI tool, Edulai, will be examined in detail. Finally, comparisons will also be made between these tools and the more recent phenomenon of ChatGPT, and shortcomings and drawbacks will be analysed.

Keywords: critical thinking, artificial intelligence, higher education, soft skills, chat GPT

Procedia PDF Downloads 103
794 An Evolutionary Perspective on the Role of Extrinsic Noise in Filtering Transcript Variability in Small RNA Regulation in Bacteria

Authors: Rinat Arbel-Goren, Joel Stavans

Abstract:

Cell-to-cell variations in transcript or protein abundance, called noise, may give rise to phenotypic variability between isogenic cells, enhancing the probability of survival under stress conditions. These variations may be introduced by post-transcriptional regulatory processes, such as the stoichiometric degradation of target transcripts by non-coding small RNAs in bacteria. As a model system, we study the iron homeostasis network in Escherichia coli, in which the RyhB small RNA regulates the expression of various targets. Using fluorescence reporter genes to detect protein levels and single-molecule fluorescence in situ hybridization to monitor transcript levels in individual cells allows us to compare noise at both the transcript and protein levels. The experimental results and computer simulations show that extrinsic noise buffers, through a feed-forward loop configuration, the increase in variability introduced at the transcript level by iron deprivation, illuminating the important role that extrinsic noise plays during stress. Surprisingly, extrinsic noise also decouples the fluctuations of two different targets, in spite of RyhB being a common upstream factor degrading both. Thus, phenotypic variability increases under stress conditions by the decoupling of target fluctuations in the same cell rather than by an increase in the noise of each. We also present preliminary results on the adaptation of cells to prolonged iron deprivation in order to shed light on the evolutionary role of post-transcriptional downregulation by small RNAs.

Keywords: cell-to-cell variability, Escherichia coli, noise, single-molecule fluorescence in situ hybridization (smFISH), transcript

Procedia PDF Downloads 160
793 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector of different dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling. This operation overlooks details that forensic experts pay close attention to. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with the known frontal ID photos of the same persons. Downscaling and manual handling were performed on the testing images. The results indicated that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled in distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, a fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field.
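The score-level fusion can be sketched as a sum of log-likelihood ratios, one from the automated system and one from the examiner, each estimated from genuine and impostor score distributions. The sketch below uses Gaussian kernel density estimates on synthetic scores; the real study calibrates against its own genuine/impostor data, so the numbers here are placeholders.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)

def llr_function(genuine, impostor):
    """Return a function mapping a comparison score to its log-likelihood ratio."""
    g_kde, i_kde = gaussian_kde(genuine), gaussian_kde(impostor)
    return lambda s: float(np.log(g_kde(s)[0] / i_kde(s)[0]))

# Synthetic calibration scores (stand-ins for real genuine/impostor data)
algo_llr = llr_function(rng.normal(0.80, 0.08, 500), rng.normal(0.40, 0.10, 500))
human_llr = llr_function(rng.normal(0.70, 0.12, 200), rng.normal(0.45, 0.12, 200))

def fused_llr(algo_score, human_score):
    # Fusion at the score level: sum the two log-likelihood ratios
    return algo_llr(algo_score) + human_llr(human_score)

print("fused LLR for (0.78, 0.72):", round(fused_llr(0.78, 0.72), 2))
print("fused LLR for (0.45, 0.50):", round(fused_llr(0.45, 0.50), 2))
```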

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 125
792 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain, as early as possible, signals about events which are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the global Internet are developed. Information in Romanian is of special interest to us. In order to obtain the mentioned tools, we follow several steps, divided into a preparatory stage and a processing stage. During the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters; this corpus constitutes more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the Petri net formalism has been used. We deal with the problem of inhabitants' evacuation in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique will be used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the PIPE analysis modules Generalized Stochastic Petri Nets (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis have been used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.
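As a small illustration of the Petri-net formalism used for the evacuation model, the sketch below encodes places, transitions and the standard token-firing rule. The two-room evacuation net is an invented toy example; the actual analysis in the paper relies on PIPE's GSPN, state-space and invariant modules.

```python
# Minimal place/transition Petri net with the standard firing rule.
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)      # place -> number of tokens
        self.transitions = transitions    # name -> (input arcs, output arcs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# Toy evacuation net: 5 people in room A, 3 in room B, corridor of capacity 2
net = PetriNet(
    marking={"roomA": 5, "roomB": 3, "corridor_free": 2, "exit": 0},
    transitions={
        "A_to_corridor": ({"roomA": 1, "corridor_free": 1}, {"in_corridor": 1}),
        "B_to_corridor": ({"roomB": 1, "corridor_free": 1}, {"in_corridor": 1}),
        "corridor_to_exit": ({"in_corridor": 1}, {"exit": 1, "corridor_free": 1}),
    },
)
net.fire("A_to_corridor")
net.fire("corridor_to_exit")
print(net.marking)
```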

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 195
791 GNSS-Aided Photogrammetry for Digital Mapping

Authors: Muhammad Usman Akram

Abstract:

This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site which is to be used in future planning and development (P&D) or for further examination, exploration, research and inspection. Survey and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies; they are also time-consuming and labor-intensive and offer less precision with limited data. In comparison, the advanced techniques save manpower and provide more precise output with a wide variety of data sets. In this experiment, the aerial photogrammetry technique is used, where a UAV flies over an area, captures geocoded images and produces a three-dimensional (3-D) model. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and the DGPS are then processed in digital image processing programs and computer-aided design software. As output we obtain a dense point cloud, a digital elevation model (DEM) and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we obtain a digital map of the area to be surveyed. In conclusion, we compared the processed data with exact measurements taken on site. The error is accepted if it does not breach the survey accuracy limits set by the institutions concerned.
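One of the flight parameters mentioned above, the ground sampling distance, follows directly from the camera geometry and flight altitude. A small sketch of the standard GSD relation is shown below; the sensor and altitude values are illustrative assumptions, not those of the survey.

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             altitude_m, image_width_px):
    """GSD in cm/pixel for a nadir-looking camera (standard photogrammetric relation)."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Illustrative UAV camera: 13.2 mm sensor width, 8.8 mm focal length, 5472 px image width
for altitude in (60, 100, 150):                  # flight altitudes in metres
    gsd = ground_sampling_distance(13.2, 8.8, altitude, 5472)
    print(f"altitude {altitude:>3} m -> GSD {gsd:.2f} cm/px")
```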

Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry

Procedia PDF Downloads 22
790 Assessment of Physical Learning Environments in ECE: Interdisciplinary and Multivocal Innovation for Chilean Kindergartens

Authors: Cynthia Adlerstein

Abstract:

Physical learning environment (PLE) has been considered, after family and educators, as the third teacher. There have been conflicting and converging viewpoints on the role of the physical dimensions of places to learn in facilitating educational innovation and quality. Despite the different approaches, PLE has been widely recognized as a key factor in the quality of the learning experience and in the levels of learning achievement in ECE. The conceptual frameworks of the field assume that PLE consists of a complex web of factors that shape the overall conditions for learning, and that much more interdisciplinary and complementary methodologies of research and development are required. Although the relevance of PLE attracts a broad international consensus, in Chile it remains under-researched and weakly regulated by public policy. Gaining deeper contextual understanding and more thoughtfully designed recommendations requires the use of innovative assessment tools that cross cultural and disciplinary boundaries to produce new hybrid approaches and improvements. When considering a PLE-based change process for ECE improvement, a central question is what dimensions, variables and indicators could allow a comprehensive assessment of PLE in Chilean kindergartens. Based on a grounded theory social justice inquiry, we adopted a mixed-method design that enabled a multivocal and interdisciplinary construction of data. By using in-depth interviews, discussion groups, questionnaires, and documental analysis, we elicited the PLE discourses of politicians, early childhood practitioners, experts in architectural design and ergonomics, ECE stakeholders, and 3- to 5-year-olds. A constant comparison method enabled the construction of the dimensions, variables and indicators through which PLE assessment is possible. Subsequently, the instrument was applied to a sample of 125 early childhood classrooms to test reliability (internal consistency) and validity (content and construct). As a result, an interdisciplinary and multivocal tool for assessing physical learning environments was constructed and validated for Chilean kindergartens. The tool is structured upon 7 dimensions (wellbeing, flexible, empowerment, inclusiveness, symbolically meaningful, pedagogically intentioned, institutional management), 19 variables and 105 indicators that are assessed through observation and registration on a mobile app. The overall reliability of the instrument is .938, while the consistency of each dimension varies between .773 (inclusive) and .946 (symbolically meaningful). The validation process through expert opinion and factorial analysis (chi-square test) has shown that the dimensions of the assessment tool reflect the factors of physical learning environments. The constructed assessment tool for kindergartens highlights the significance of the physical environment in early childhood educational settings. The relevance of the instrument lies in its interdisciplinary approach to PLE and in its capability to guide innovative learning environments, based on educational habitability. Though further analyses are required for concurrent validation and standardization, the tool has been considered by practitioners and ECE stakeholders as an intuitive, accessible and remarkable instrument to raise awareness of PLE and of the equitable distribution of learning opportunities.

Keywords: Chilean kindergartens, early childhood education, physical learning environment, third teacher

Procedia PDF Downloads 356
789 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine

Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy

Abstract:

Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method, especially when there are different types of landscapes in the study area, is quite difficult. This paper is an attempt to compare the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia–West Asia Corridor, which is considered one of the main parts of the Belt and Road Initiative (BRI) project. The cloud-based Google Earth Engine (GEE) platform was used for generating a land cover map for the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The selected ML algorithms (RF, SVM, and ANN) were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings of the study illustrated that, among the three ML algorithms, RF, with 91% overall accuracy, had the best result in producing a land cover map for the China-Central Asia–West Asia Corridor, whereas ANN showed the worst result with 85% overall accuracy. The great performance of GEE in applying different ML algorithms and handling huge volumes of remotely sensed data in the present study showed that it could also help researchers generate reliable long-term land cover change maps. The findings of this research are of great importance for decision-makers and the BRI's authorities in strategic land use planning.
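Outside the GEE environment, the same three-way comparison can be prototyped locally on a labelled sample. The sketch below uses scikit-learn classifiers as stand-ins for GEE's built-in implementations (the roughly equivalent GEE calls are ee.Classifier.smileRandomForest and ee.Classifier.libsvm); the feature matrix here is synthetic, not Landsat-8 data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-pixel spectral band/index features and class labels
X, y = make_classification(n_samples=3000, n_features=8, n_informative=6,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

classifiers = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", C=10.0, gamma="scale"),
    "ANN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                         random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: overall accuracy {acc:.2%}")
```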

Keywords: land cover, google earth engine, machine learning, remote sensing

Procedia PDF Downloads 111
788 Impact of Charging PHEV at Different Penetration Levels on Power System Network

Authors: M. R. Ahmad, I. Musirin, M. M. Othman, N. A. Rahmat

Abstract:

Plug-in Hybrid-Electric Vehicles (PHEVs) have gained immense popularity in recent years. PHEVs offer numerous advantages compared to conventional internal-combustion engine (ICE) vehicles. Millions of PHEVs are estimated to be on the road in the USA by 2020. Uncoordinated PHEV charging is believed to cause severe impacts on the power grid, i.e., feeder, line and transformer overloads and voltage drops. Nevertheless, an improper PHEV data model used in such studies may render their findings inappropriate. Although smart charging has become more attractive to researchers in recent years, its implementation is not yet attainable on the street due to its requirements for physical infrastructure readiness and technology advancement. As a first step, it is best to study the impact of PHEV charging based on real vehicle travel data from the National Household Travel Survey (NHTS) and at the present charging rate. Due to the current lack of street charging stations, charging PHEVs at home is the best option and has been considered in this work. This paper proposes a technique that comprehensively presents the impact of PHEV charging on power system networks, considering huge numbers of PHEV samples with their travel data patterns. A Vehicle Charging Load Profile (VCLP) is developed and implemented in the IEEE 30-bus test system, which represents a portion of the American Electric Power system (Midwestern US). A normalization technique is used to match real-time loads at all buses. Results from the study indicate that PHEV opportunity charging will have significant impacts on power system networks, especially when bigger battery capacities (kWh) are used and at higher penetration levels.
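The construction of a vehicle charging load profile can be sketched as a simple aggregation of per-vehicle charging intervals derived from travel data. The arrival times, energy requirements and charger rating below are random placeholders rather than NHTS records, and uncoordinated "charge on arrival" behaviour is assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
n_vehicles = 1000
charger_kw = 3.3                                   # typical home charger rating

# Placeholder travel data: home-arrival hour and energy needed to refill (kWh)
arrival_hour = rng.normal(18, 2, n_vehicles).astype(int) % 24
energy_needed = rng.uniform(4.0, 16.0, n_vehicles)

profile = np.zeros(24)                             # aggregate kW demand per hour
for arrive, energy in zip(arrival_hour, energy_needed):
    hours_left = energy / charger_kw               # uncoordinated: charge on arrival
    h = arrive
    while hours_left > 0:
        profile[h % 24] += charger_kw * min(1.0, hours_left)
        hours_left -= 1.0
        h += 1

peak = profile.argmax()
print(f"peak charging load {profile[peak]:.0f} kW at hour {peak}:00")
```

Normalizing this profile against the base load at each bus of the IEEE 30-bus system would then give the per-bus injections used in the impact study.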

Keywords: plug-in hybrid electric vehicle, transportation electrification, impact of charging PHEV, electricity demand profile, load profile

Procedia PDF Downloads 283
787 WhatsApp as a Public Health Management Tool in India

Authors: Drishti Sharma, Mona Duggal

Abstract:

Background: WhatsApp can serve as a cost-effective, scalable, convenient, and popular medium for public health management related communication in the developing world, where the existing system of communication is top-down and slow. The product supports sending and receiving a variety of media: text, photos, videos, documents, and location, as well as voice/video calls. With the growing number of smartphone users and improving access to and penetration of the internet, the scope of information technology remains immense for resolving the hurdles faced by the traditional public health system. Poor infrastructure, gaps in digital literacy, faulty documentation, strict organizational hierarchy and slow movement of information across desks and offices all make WhatsApp an efficient prospect to complement the existing system of communication, feedback and leadership for the public health system in India. Objective: This study investigates the benefits, challenges and limitations associated with WhatsApp usage as a public health management tool. Methods: The study was conducted within the Chandigarh Union Territory. We used a qualitative approach and conducted individual semi-structured interviews and group interviews (n = 10). Participants included medical officers (n = 20), program managers (n = 4), academicians (n = 2) and administrators (n = 2). Thematic and content qualitative analyses were conducted. The message log of the WhatsApp group of one of the health programs was assessed. Results: Medical officers said that WhatsApp helped them remain in touch with the program officer. They could easily give feedback and highlight those challenges which needed immediate intervention from the program managers, hence they felt supported. The application also helped them share pictures of their activities (meetings and field activities) with the group, which they thought inspired others and gave them immense satisfaction. It also helped build stronger relationships and better coordination among themselves, the same being important in team events. For program managers, it had become a portal for coordinating large-scale campaigns. Its reach and the fact that the feedback is real-time make WhatsApp ideal for district-level events. Though the easy informal connectivity made them answerable to their staff, it also provided them with flexibility in operations. It turned out to be an important portal for sharing outcome- and goal-related feedback (both positive and negative) with the team. To be sure, using WhatsApp for a public health program presents considerable challenges, including technological barriers, organizational challenges, gender issues, confidentiality concerns and unplanned aftereffects. Nevertheless, its advantages in a low-cost setting make it an efficient alternative. Conclusion: WhatsApp has become an integral part of our lives. Use of this app for public health program management within closed groups looks promising and useful. At the same time, addressing the challenges involved would make its usage safer.

Keywords: communication, mobile technology, public health management, WhatsApp

Procedia PDF Downloads 172
786 Challenges and Recommendations for Medical Device Tracking and Traceability in Singapore: A Focus on Nursing Practices

Authors: Zhuang Yiwen

Abstract:

The paper examines the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. One of the major challenges identified is the lack of a standard coding system for medical devices, which makes it difficult to track them effectively. The paper suggests the use of the Unique Device Identifier (UDI) as a single standard for medical devices to improve tracking and reduce errors. The paper also explores the use of barcoding and image recognition to identify and document medical devices in nursing practices. In nursing practices, the use of barcodes for identifying medical devices is common. However, the information contained in these barcodes is often inconsistent, making it challenging to identify which segment contains the model identifier. Moreover, the use of barcodes may be improved with the use of UDI, but many subsidized accessories may still lack barcodes. The paper suggests that readiness for UDI and barcode standardization requires standardized information, fields, and logic in electronic medical record (EMR), operating theatre (OT), and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. Nursing workflow and data flow also need to be taken into account. The paper also explores the use of image recognition, specifically the Tesseract OCR engine, to identify and document implants in public hospitals due to limitations in barcode scanning. The study found that the solution requires an implant information database and checking the output against the database. The solution also requires customization of the algorithm, cropping out objects affecting text recognition, and applying adjustments. The solution requires additional resources and costs for a mobile/hardware device, which may pose space constraints and require maintenance of sterile criteria. Integration with the EMR is also necessary, and the solution requires changes in the user's workflow. The paper suggests the long-term use of Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) as a supporting terminology to improve clinical documentation and data exchange in healthcare. SNOMED CT provides a standardized way of documenting and sharing clinical information with respect to procedure, patient and device documentation, which can facilitate interoperability and data exchange. In conclusion, the paper highlights the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. The paper suggests the use of UDI and barcode standardization to improve tracking and reduce errors. It also explores the use of image recognition to identify and document medical devices in nursing practices. The paper emphasizes the importance of standardized information, fields, and logic in EMR, OT, and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. These recommendations could help the Singapore healthcare system improve the tracking and traceability of medical devices and ultimately enhance patient safety.
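The image-recognition workflow described above (OCR on the implant label, then checking the output against an implant database) can be sketched with the Tesseract engine via pytesseract and simple fuzzy matching. The label image path and implant list are hypothetical, and a production system would add the cropping and preprocessing adjustments mentioned in the text before integrating with the EMR.

```python
import difflib
from PIL import Image
import pytesseract

# Hypothetical reference list of implant model identifiers
IMPLANT_DB = ["ACME-HIP-200", "ACME-HIP-205", "ORTHO-KNEE-110", "CARDIO-STENT-3X"]

def recognise_implant(image_path: str):
    """OCR an implant label and match the recognised text against the implant database."""
    raw_text = pytesseract.image_to_string(Image.open(image_path))
    matches = []
    for token in raw_text.split():
        hit = difflib.get_close_matches(token.upper(), IMPLANT_DB, n=1, cutoff=0.8)
        if hit:
            matches.append(hit[0])
    return raw_text, matches

if __name__ == "__main__":
    text, matches = recognise_implant("implant_label.jpg")   # hypothetical image file
    print("OCR output:", text)
    print("database matches:", matches)
```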

Keywords: medical device tracking, unique device identifier, barcoding and image recognition, systematized nomenclature of medicine clinical terms

Procedia PDF Downloads 72
785 Toxic Masculinity as Dictatorship: Gender and Power Struggles in Tomás Eloy Martínez´s Novels

Authors: Mariya Dzhyoyeva

Abstract:

In the present paper, I examine manifestations of toxic masculinity in the novels by Tomás Eloy Martínez, a post-Boom author, journalist, literary critic, and one of the representatives of the Argentine writing diaspora. I focus on the analysis of Martínez´s characters that display hypermasculine traits to define the relationship between toxic masculinity and power, including the power of authorship and violence as they are represented in his novels. The analysis reveals a complex network in which gender, power, and violence are intertwined and influence and modify each other. As the author exposes toxic masculine behaviors that generate violence, he looks to undermine them. Departing from M. Kimmel´s idea of masculinity as homophobia, I examine how Martínez “outs” his characters by incorporating into the narrative some secret, privileged sources that provide alternative accounts of their otherwise hypermasculine lives. These background stories expose their “weaknesses,” both physical and mental, and thereby feminize them in their own eyes. In a similar way, the toxic masculinity of the fictional male author that wields his power by abusing the written word as he abuses the female character in the story is exposed as a complex of insecurities accumulated by the character due to his childhood trauma. The artistic technique that Martínez uses to condemn the authoritarian male behavior is accessing his subjectivity and subverting it through a multiplicity of identities. Martínez takes over the character’s “I” and turns it into a host of pronouns with a constantly shifting point of reference that distorts not only the notions of gender but also the very notion of identity. In doing so, he takes the character´s affirmation of masculinity to the limit where the very idea of it becomes unsustainable. Viewed in the context of Martínez´s own exilic story, the condemnation of toxic masculine power turns into the condemnation of dictatorship and authoritarianism.

Keywords: gender, masculinity, toxic masculinity, authoritarian, Argentine literature, Martínez

Procedia PDF Downloads 65
784 Comparison of Two Neural Networks To Model Margarine Age And Predict Shelf-Life Using Matlab

Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien

Abstract:

The present study was aimed at developing and comparing two neural-network-based predictive models to predict the shelf-life/product age of South African margarine, using free fatty acid (FFA), water droplet size (D3.3), water droplet distribution (e-sigma), moisture content, peroxide value (PV), anisidine value (AnV) and total oxidation (totox) value as input variables to the model. Brick margarine products with varying ages, ranging from fresh (week 0) to week 47, were sourced. The brick margarine products had been stored at 10 and 25 °C and were characterized. JMP and MATLAB models to predict shelf-life/margarine age were developed, and their performances were compared. The key performance indicators used to evaluate the model performances were the correlation coefficient (CC), root mean square error (RMSE), and mean absolute percentage error (MAPE) relative to the actual data. The MATLAB-developed model showed better performance in all three performance indicators: the correlation coefficient of the MATLAB model was 99.86% versus 99.74% for the JMP model, the RMSE was 0.720 compared to 1.005, and the MAPE was 7.4% compared to 8.571%. The MATLAB model was selected as the most accurate, and the number of hidden neurons/nodes was then optimized to develop a single predictive model. The optimized MATLAB model with 10 hidden neurons showed better performance than the models with 1 and 5 hidden neurons. The developed models can be used by margarine manufacturers, food research institutions, researchers, etc., to predict shelf-life/margarine product age, optimize the addition of antioxidants, extend the shelf-life of products and proactively troubleshoot problems related to changes which have an impact on the shelf-life of margarine, without conducting expensive trials.
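As a rough, library-agnostic counterpart to the MATLAB and JMP models, the sketch below fits a small feed-forward network to the seven chemical/physical inputs to predict product age in weeks. The data are randomly generated stand-ins; only the input variables and the evaluation metrics (CC, RMSE, MAPE) follow the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 300
# Stand-in measurements: FFA, D3.3, e-sigma, moisture, PV, AnV, totox
X = rng.uniform(0, 1, size=(n, 7))
age_weeks = 47 * X.mean(axis=1) + rng.normal(0, 1.5, n)   # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, age_weeks, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
mape = mean_absolute_percentage_error(y_te, pred)
cc = np.corrcoef(y_te, pred)[0, 1]
print(f"CC {cc:.3f}  RMSE {rmse:.3f} weeks  MAPE {mape:.1%}")
```

Varying hidden_layer_sizes (e.g. 1, 5 and 10 neurons) mirrors the hidden-node optimization step reported in the abstract.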

Keywords: margarine shelf-life, predictive modelling, neural networks, oil oxidation

Procedia PDF Downloads 190
783 Efficient Compact Micro Dielectric Barrier Discharge (DBD) Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems

Authors: D. Kuvshinov, A. Siswanto, J. Lozano-Parada, W. Zimmerman

Abstract:

Ozone is well known as a powerful oxidant with a fast reaction rate. Ozone-based processes leave no by-products, as non-reacted ozone reverts to the original oxygen molecule. Therefore, the application of ozone is widely accepted as one of the main directions for sustainable and clean technology development. A number of technologies require ozone to be delivered to specific points of a production network or reactor construction. Due to space constraints and the high reactivity and short lifetime of ozone, the use of ozone generators, even at bench-top scale, is practically limited. This requires the development of mini/micro-scale ozone generators which can be directly incorporated into production units. Our report presents a feasibility study of a new micro-scale reactor for ozone generation (MROG). Data on MROG calibration and indigo decomposition at different operating conditions are presented. At selected operating conditions, with a residence time of 0.25 s, the process of ozone generation is not limited by reaction rate, and the amount of ozone produced is a function of the power applied. It was shown that the MROG is capable of producing ozone at voltage levels starting from 3.5 kV, with an ozone concentration of 5.28E-6 mol/L at 5 kV. This is in line with the data presented in the numerical investigation of the MROG. It was shown that, compared to a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure. The MROG construction makes it applicable for submerged and dry systems. With a robust, compact design, the MROG can be used as an incorporated unit in production lines of high complexity.

Keywords: dielectric barrier discharge (DBD), micro reactor, ozone, plasma

Procedia PDF Downloads 331
782 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification

Authors: Oumaima Khlifati, Khadija Baba

Abstract:

Pavement distress is the main factor responsible for the deterioration of road structure durability, vehicle damage, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. Traditionally, the auscultation of pavement distress has been based on manual surveys, which are extremely time-consuming, labor-intensive, and require domain expertise. Therefore, automatic distress detection is needed to reduce the cost of manual inspection and avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to get the highest accuracy for our model, we adjust the structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, which include batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best performing one. After the model is optimized, performance metrics are calculated, which describe the training and validation accuracies, precision, recall, and F1 score.
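A condensed sketch of the kind of DCNN and exposed hyperparameters described above is shown below using tf.keras. The image size, class count and specific hyperparameter values are illustrative assumptions rather than the study's final optimized configuration; a sigmoid output with binary cross-entropy is used to reflect the multi-label setup.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5          # transverse, longitudinal, alligator, pothole, intact
IMG_SIZE = (128, 128)    # illustrative input resolution

def build_model(n_conv_blocks=3, filters=32, kernel_size=3, learning_rate=1e-3):
    """Small DCNN whose structural hyperparameters are exposed as arguments."""
    model = models.Sequential()
    model.add(layers.Input(shape=IMG_SIZE + (3,)))
    for i in range(n_conv_blocks):
        model.add(layers.Conv2D(filters * (2 ** i), kernel_size,
                                activation="relu", padding="same"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(NUM_CLASSES, activation="sigmoid"))  # multi-label output
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model(n_conv_blocks=3, filters=32, learning_rate=1e-3)
model.summary()
# Training (with batch size and epochs as fine-tuning hyperparameters) would be:
# model.fit(train_ds, validation_data=val_ds, epochs=30, batch_size=32)
```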

Keywords: distress pavement, hyperparameters, automatic classification, deep learning

Procedia PDF Downloads 85
781 A Critical Study on Unprecedented Employment Discrimination and Growth of Contractual Labour Engaged by Rail Industry in India

Authors: Munmunlisa Mohanty, K. D. Raju

Abstract:

The rail industry, one of the model employers in India, has separate national legislation (the Railways Act, 1989) to regulate its vast employment structure, which functions across the country. Indian Railways is not only the premier transport industry of the country; it is Asia's most extensive rail network organisation and the world's second-largest industry functioning under one management. With the growing globalization of industrial products, the scope of anti-employment discrimination is no longer confined to the gender aspect only; instead, it extends to the unregularized classification of the labour force in various industrial establishments in India. The Indian rail industry has inadvertently enhanced such discriminatory employment trends by engaging contractual labour in an unprecedented manner. The engagement of contractual labour by the rail industry has eroded the core "employer-employee" relationship between rail management and the contractual labour employed through contractors. This employment trend reduces the cost of production and supervision, discourages the contractual labour from forming unions, and reduces its collective bargaining capacity. So, the primary intention of this paper is to highlight the increasing discriminatory employment scope for contractual labour engaged by Indian Railways. This paper critically analyses the diminishing perspective of employment opportunity practiced by Indian Railways towards contractual labour and demands an urgent outlook on the probable scope of anti-employment discrimination against contractual labour engaged by Indian Railways. The researcher used a doctrinal methodology, in which primary materials (the Railways Act, the Contract Labour Act and the Occupational Safety, Health and Working Conditions Code, 2020) and secondary data (the CAG Report 2018, Railways Employment Regulation Rules, ILO reports, etc.) are used.

Keywords: anti-employment, CAG Report, contractual labour, discrimination, Indian Railway, principal employer

Procedia PDF Downloads 165
780 Academic Mobility within EU as a Voluntary or a Necessary Move: The Case of German Academics in the UK

Authors: Elena Samarsky

Abstract:

According to German national records and willingness-to-migrate surveys, emigration is much more attractive for better-educated citizens employed in white-collar positions, with academics displaying the highest migration rate. The case study of academic migration from Germany is furthermore intriguing due to the country's financial power, competitive labour market and relatively good living standards, working conditions and high wage rates. Investigation of such mobility challenges the traditional economic view of migration, as it raises the question of why people choose to leave highly industrialized countries known for their high living standards, stable political scene and prosperous economy. Within the regional domain, examining the mobility of Germans contributes to the ongoing debate over the extent of the influence of the EU mobility principle on migration decisions. The latter is of particular interest, as it may shed light on the extent to which it frames individual migration paths, defines motivations and colours the experience of the migration action itself. The paper is based on the analysis of migration decisions obtained through in-depth interviews with German academics employed in the UK. These retrospective interviews were conducted with German academics across selected universities in the UK, employed in a variety of academic fields and at different career stages. The interviews provide a detailed description of what motivated people to search for a post in another country, which attributes of such a job need to be satisfied in order to facilitate migration, as well as general information on the particularities of an academic career and the institutions involved. In the course of the project, it became evident that although securing financial stability was a non-negotiable factor in migration (e.g., a work contract signed before relocation), non-pecuniary motivations played a significant role as well. The migration narratives of this group, the highly skilled, whose human capital is transferable and whose expertise is positively evaluated by countries, are mainly characterised by the search for personal development and career advancement rather than a direct increase in income. Such records are also consistent in showing that, in the case of academics, scientific freedom and independence are the main attributes of a perfect job and are a substantial motivator. On the micro level, migration is depicted as an opportunistic action, addressed in terms of a voluntary rather than an imposed decision. However, on the macro level, the findings suggest that such opportunities are rather an outcome embedded in the peculiarities of academia and its historical and structural developments. This, in turn, contributes significantly to the emergence of the scene in which the migration action takes place. The paper suggests further comparative research on the intersection of the macro and micro levels, and in particular on how both national academic institutions and the EU mobility principle shape the migration of academics. In light of continuous attempts to make the European labour market more mobile and attractive, such findings ought to have direct implications for policy.

Keywords: migration, EU, academics, highly skilled labour

Procedia PDF Downloads 254
779 Nondestructive Acoustic Microcharacterisation of Gamma Irradiation Effects on Sodium Oxide Borate Glass X2Na2O-X2B2O3 by Acoustic Signature

Authors: Ibrahim Al-Suraihy, Abdellaziz Doghmane, Zahia Hadjoub

Abstract:

We discuss in this work the elastic properties of non-irradiated and irradiated sodium borate glasses X2Na2O-X2B2O3 with 0 ≤ x ≤ 27 (mol %), using acoustic microscopy to measure Rayleigh and longitudinal wave velocities at microscopic resolution. The acoustic material signatures were first measured, from which the characteristic surface velocities were determined. Longitudinal and shear ultrasonic velocities were measured in sodium borate glass samples of different compositions before and after irradiation with γ-rays. The results showed that the effect of increasing sodium oxide content on ultrasonic velocity appears more clearly than that of γ-radiation. It was found that as the Na2O content increases, longitudinal velocities vary from 3832 to 5636 m/s in the irradiated sample and from 4010 to 5836 m/s in the highly irradiated sample (10 doses), whereas shear velocities vary from 2223 to 3269 m/s in the irradiated sample and from 2326 m/s at low irradiation to 3385 m/s in the highly irradiated sample (10 doses). The effect of increasing sodium oxide content on ultrasonic velocity was thus very clear. The increase in velocity is attributed to the gradual increase in the rigidity of the glass, and hence the strengthening of the network, due to the gradual change of boron atoms from three-fold to four-fold coordination with oxygen atoms. The ultrasonic velocity data for the glass samples were used to derive the elastic moduli. It was found that ultrasonic velocity, elastic modulus and microhardness increase with increasing sodium oxide content and increasing γ-radiation dose.
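As a hedged illustration of how elastic moduli follow from the reported ultrasonic velocities, the short Python sketch below applies the standard relations L = rho*vL^2, G = rho*vS^2, K = L - 4G/3 and E = 9KG/(3K + G). The density and the velocity pair used are illustrative assumptions loosely taken from the ranges quoted above; they are not values reported by the authors.

```python
# Minimal sketch: deriving elastic moduli from ultrasonic velocities.
# The density below is an assumed value for a sodium borate glass; the
# abstract does not report densities.

RHO = 2300.0   # kg/m^3, assumed glass density (illustrative)
V_L = 5636.0   # m/s, longitudinal velocity (upper end of the quoted range)
V_S = 3269.0   # m/s, shear velocity (same sample)


def elastic_moduli(rho, v_l, v_s):
    """Return longitudinal, shear, bulk, Young's moduli (Pa) and Poisson's ratio."""
    longitudinal = rho * v_l ** 2                       # L = rho * vL^2
    shear = rho * v_s ** 2                              # G = rho * vS^2
    bulk = longitudinal - (4.0 / 3.0) * shear           # K = L - 4G/3
    young = 9.0 * bulk * shear / (3.0 * bulk + shear)   # E = 9KG / (3K + G)
    poisson = (3.0 * bulk - 2.0 * shear) / (2.0 * (3.0 * bulk + shear))
    return longitudinal, shear, bulk, young, poisson


if __name__ == "__main__":
    L, G, K, E, nu = elastic_moduli(RHO, V_L, V_S)
    print(f"L = {L/1e9:.1f} GPa, G = {G/1e9:.1f} GPa, "
          f"K = {K/1e9:.1f} GPa, E = {E/1e9:.1f} GPa, nu = {nu:.3f}")
```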

Keywords: mechanical properties, X2Na2O-X2B2O3, acoustic signature, SAW velocities, additives, gamma-radiation dose

Procedia PDF Downloads 395
778 Re-Integrating Historic Lakes into the City Fabric in the Case of Vandiyur Lake, Madurai

Authors: Soumya Pugal

Abstract:

The traditional lake system of an ancient town is a network of water-holding blue spaces, built more than 2000 years ago by the rulers of ancient cities and maintained for centuries by the original communities. These blue spaces form a micro-watershed in which each tank has its own catchment, tank bed area, and command area. The lakes are connected by a common sluice from the upstream tank, which feeds the downstream tank. The lakes used to be of socio-economic significance, but the rapid growth of the city, together with changes in the systems of lake ownership, has turned them into the backyard of urban development. Madurai is one such historic city facing the challenge of balancing the social, ecological, and economic requirements of its people with respect to the traditional lake system. To address the problems caused by the neglect of these vital ecological systems, the theory of transformative placemaking through water-sensitive urban design is explored. This approach re-invents the relationship between people and urban lakes to suit modern aspirations while respecting the environment. The thesis aims to develop strategies to guide development along the major urban lake of Vandiyur, equipping the lake to meet the growing recreational requirements of the city and to give a renewed connection between people and water. The intent of the design is to understand the ecological and social structures of the lake and to find ways of using it to produce social cohesion within the community and to balance the city's economic and ecological requirements through transformative placemaking and water-sensitive urban design.

Keywords: urban lakes, urban blue spaces, placemaking, revitalisation of lakes, urban cohesion

Procedia PDF Downloads 70
777 The Image of Saddam Hussein and Collective Memory: The Semiotics of Ba'ath Regime's Mural in Iraq (1980-2003)

Authors: Maryam Pirdehghan

Abstract:

During the Ba'ath Party's rule in Iraq, propaganda was used to justify Saddam Hussein's rule and to promote his image in the collective memory as the greatest Arab leader. Consequently, urban walls were routinely covered with images of Saddam. Relying on these images, the regime aimed to evoke meanings in public opinion that would supposedly strengthen Saddam's power and reconstruct facts to legitimize his political ideology. Nonetheless, Saddam was not always portrayed with common and explicit elements; in certain periods of his rule, the paintings depicted him in an unusual context in which various historical and contemporary elements were combined in a narrative background. An understanding of the implied socio-political references of these elements is therefore required to fully elucidate the impact of these images on the memory and collective unconscious of the Iraqi people. To obtain such understanding, the following questions need to be addressed: a) How was Saddam Hussein portrayed in murals during his rule? b) What elements and mythical-historical narratives are found in the paintings? c) Which of Saddam's political views were impressed on the collective memory through murals? Employing visual semiotics, this study reveals that during Saddam Hussein's regime the paintings were initially simple portraits but gradually transformed into narrative images characterized by a complex network of historical, mythical and religious elements. These elements demonstrate the transformation of a secular-nationalist politician into a Muslim ruler who tried to instill three major policies in domestic and international relations: the Arabization of Iraq together with the propagation of pan-Arabism ideology (first period), the implementation of an anti-Israel policy (second period), and the implementation of an anti-American-British policy (last period).

Keywords: Ba'ath Party, Saddam Hussein, mural, Iraq, propaganda, collective memory

Procedia PDF Downloads 322
776 The Use of Unmanned Aerial System (UAS) in Improving the Measurement System on the Example of Textile Heaps

Authors: Arkadiusz Zurek

Abstract:

The potential of drones is visible in many areas of logistics, especially for monitoring and controlling numerous processes. The technologies implemented in the last decade open new possibilities for companies that until now had not even considered them, such as warehouse inventories. Unmanned aerial vehicles are no longer seen as a revolutionary tool of Industry 4.0, but rather as tools in the daily work of factories and logistics operators. The research problem is to develop a method for measuring, by drone, the weight of goods in a selected link of the clothing supply chain. The purpose of this article is to analyse the causes of errors in traditional measurements and then to identify adverse events related to the use of drones for the inventory of a heap of textiles intended for production purposes. On this basis, it will be possible to develop guidelines for eliminating the causes of these events in the drone-based measurement process. In a real environment, work was carried out to determine the volume and weight of the textiles, including weighing a textile sample to determine the average density of the assortment, establishing a local geodetic network, terrestrial laser scanning, and a photogrammetric flight using an unmanned aerial vehicle. From the analysis of the measurement data obtained at the facility, the volume and weight of the assortment and the accuracy of their determination were established. The article presents how such heaps are currently measured and what adverse events occur, describes the photogrammetric techniques of this kind performed so far by external drones for the inventory of wind farms or station construction, and compares them with the measurement system applied to the aforementioned textile heap inside a large-format facility.
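The weight-estimation chain described above (weigh a sample to obtain an average density, obtain the heap volume from laser scanning or photogrammetry, multiply the two) can be sketched in a few lines. The Python example below is a minimal illustration under that assumption; all numeric values are hypothetical placeholders, not measurements from the study.

```python
# Minimal sketch of the density-times-volume weight estimate described above.
# Sample mass/volume and heap volume are illustrative placeholders.

def average_density(sample_mass_kg: float, sample_volume_m3: float) -> float:
    """Average assortment density obtained from a weighed reference sample."""
    return sample_mass_kg / sample_volume_m3


def heap_weight(heap_volume_m3: float, density_kg_m3: float) -> float:
    """Estimated heap weight as measured volume times average density."""
    return heap_volume_m3 * density_kg_m3


if __name__ == "__main__":
    # Hypothetical sample: 12.5 kg of textiles occupying 0.05 m^3.
    rho = average_density(sample_mass_kg=12.5, sample_volume_m3=0.05)
    # Hypothetical heap volume, e.g. from a photogrammetric point-cloud model.
    volume = 84.0  # m^3
    print(f"density = {rho:.0f} kg/m^3, "
          f"estimated heap weight = {heap_weight(volume, rho):.0f} kg")
```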

Keywords: drones, unmanned aerial system, UAS, indoor system, security, process automation, cost optimization, photogrammetry, risk elimination, industry 4.0

Procedia PDF Downloads 81
775 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ device is designed around a PIC18F4550 microcontroller communicating with a personal computer (PC) through USB (Universal Serial Bus). The research applied knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device that uses an LM35 sensor to measure weather parameters, together with artificial intelligence (an Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) to evaluate its performance. Both devices (standard and designed) were exposed to the same atmospheric conditions for 180 days of data collection (temperature, relative humidity, and pressure). The acquired data were trained in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), R-squared (R2), and Mean Percentage Error (MPE) were used as standard evaluation metrics to assess the performance of the models in predicting precipitation. The results show that the developed device has an efficiency of 96% and is compatible with personal computers and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) had a disparity error of 1.59%, while ARIMA's was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
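The abstract names RMSE, MAE, R2 and MPE as the evaluation metrics. The Python sketch below shows one plausible way these could be computed for a rainfall forecast against observations; the sample arrays are invented placeholders, and the metric definitions are the textbook ones, which may differ in detail from the authors' MATLAB implementation.

```python
# Minimal sketch of the forecast-evaluation metrics named in the abstract.
# The rainfall series below are hypothetical, not the FUTA data.
import numpy as np


def evaluate(observed, predicted):
    """Return RMSE, MAE, R-squared and mean percentage error for a forecast."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = observed - predicted
    rmse = np.sqrt(np.mean(err ** 2))                       # root mean square error
    mae = np.mean(np.abs(err))                              # mean absolute error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                              # R-squared
    mpe = np.mean(err / observed) * 100.0                   # mean percentage error (observed != 0)
    return {"RMSE": rmse, "MAE": mae, "R2": r2, "MPE_%": mpe}


if __name__ == "__main__":
    rain_obs = [12.0, 30.5, 8.2, 0.4, 22.1]   # hypothetical observed rainfall (mm)
    rain_ann = [11.4, 31.2, 9.0, 0.5, 21.0]   # hypothetical ANN forecast (mm)
    print(evaluate(rain_obs, rain_ann))
```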

Keywords: data acquisition system, design device, weather development, predict precipitation and (FUTA) standard device

Procedia PDF Downloads 86
774 We Have Never Seen a Dermatologist: Prisons Telederma Project Reaching the Unreachable through Teledermatology

Authors: Innocent Atuhe, Babra Nalwadda, Grace Mulyowa, Annabella Habinka Ejiri

Abstract:

Background: Atopic Dermatitis (AD) is one of the most prevalent and fastest-growing chronic inflammatory skin diseases in African prisons. AD care is limited in Africa due to a lack of information about the disease among primary care workers, limited access to dermatologists, a lack of proper training for healthcare workers, and a shortage of appropriate treatments. We designed and implemented the Prisons Telederma project based on the recommendations of the International Society of Atopic Dermatitis. We aimed to: i) increase awareness and understanding of teledermatology among prison health workers and ii) improve treatment outcomes of prisoners with atopic dermatitis through increased access to and utilization of consultant dermatologists via teledermatology in Ugandan prisons. Approach: We used store-and-forward teledermatology (SAF-TD) to increase access to dermatologist-led care for prisoners and prison staff with AD. We conducted five days of training for prison health workers using an adapted WHO training guide on recognizing neglected tropical diseases through changes in the skin, together with an adapted American Academy of Dermatology (AAD) Childhood AD Basic Dermatology Curriculum designed to help trainees develop a clinical approach to the evaluation and initial management of patients with AD. This training was followed by blended e-learning, webinars facilitated by consultant dermatologists with local knowledge of medication and local practices, apps adjusted for pigmented skin, WhatsApp group discussions, and the sharing of pigmented-skin AD pictures and treatments via Zoom meetings. We hired a team of Ugandan senior consultant dermatologists to draft an iconographic atlas of the main dermatoses in pigmented African skin and shared this atlas with prison health staff for use as a job aid. We had planned to use the MySkinSelfie mobile phone application to take and share skin pictures of prisoners with AD with consultant dermatologists, who would review the pictures and prescribe appropriate treatment; unfortunately, the National Health Service withdrew the app from the market due to technical issues. We monitored and evaluated treatment outcomes using the Patient-Oriented Eczema Measure (POEM) tool. We held four advocacy meetings to persuade relevant stakeholders to increase the supply and availability of first-line AD treatments, such as emollients, in prison health facilities. Results: We produced the very first iconographic atlas of the main dermatoses in pigmented African skin. We increased: i) the proportion of prison health staff with adequate knowledge of AD and teledermatology from 20% to 80%; ii) the proportion of prisoners with AD reporting improvement in disease severity (POEM scores) from 25% to 35% in one year; iii) the proportion of prisoners with AD seen by a consultant dermatologist through teledermatology from 0% to 20% in one year; and iv) the availability of AD-recommended treatments in prison health facilities from 5% to 10% in one year. Our study contributes to the use, evaluation, and verification of teledermatology as a way to bring specialist dermatology services to hard-to-reach areas and vulnerable populations such as prisoners.

Keywords: teledermatology, prisoners, reaching, un-reachable

Procedia PDF Downloads 99
773 Understanding the Semantic Network of Tourism Studies in Taiwan by Using Bibliometrics Analysis

Authors: Chun-Min Lin, Yuh-Jen Wu, Ching-Ting Chung

Abstract:

The formulation of tourism policies requires objective academic research and evidence as support, especially research from local academia. Taiwan is a small island whose economic growth relies heavily on tourism revenue, and the Taiwanese government has been devoted to promoting the tourism industry over the past few decades. Scientific research outcomes by Taiwanese scholars can help lay the foundations for drafting future tourism policy. In this study, a total of 120 full journal articles published between 2008 and 2016 in the Journal of Tourism and Leisure Studies (JTSL) were examined to explore the trend of tourism research in Taiwan. JTSL is one of the most important Taiwanese journals in the tourism discipline; it focuses on tourism-related issues and uses traditional Chinese as the publication language. The co-word analysis method from bibliometrics was employed for the semantic analysis. When analyzing Chinese words and phrases, word segmentation is a crucial step: it must be carried out first, and precisely, in order to obtain meaningful words or word chunks for subsequent frequency calculation. A word segmentation system based on an N-gram algorithm was developed to conduct the semantic analysis, and the 100 groups of meaningful phrases with the highest recurrence rates were identified. Subsequently, co-word analysis was employed for semantic classification. The results show that the themes of recent tourism research in Taiwan cover tourism education, environmental protection, hotel management, information technology, and senior tourism. These results give insight into the related issues and can serve as a reference for tourism-related policy making and follow-up research.
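As a rough illustration of the pipeline described above (N-gram segmentation of Chinese text, frequency ranking, then co-word counting), the Python sketch below builds overlapping character n-grams and a document-level co-occurrence table. It is a simplified stand-in for the authors' system: the toy documents, the bigram choice, and the ranking cutoff are all assumptions made for demonstration.

```python
# Minimal sketch of N-gram segmentation followed by co-word (co-occurrence)
# counting. The toy documents below stand in for JTSL article texts.
from collections import Counter
from itertools import combinations


def char_ngrams(text: str, n: int):
    """Overlapping character n-grams, the basic unit of N-gram segmentation."""
    text = text.replace(" ", "")
    return [text[i:i + n] for i in range(len(text) - n + 1)]


def top_phrases(documents, n=2, k=10):
    """The k most frequent n-gram 'phrases' across all documents."""
    counts = Counter(g for doc in documents for g in char_ngrams(doc, n))
    return counts.most_common(k)


def coword_counts(documents, vocabulary, n=2):
    """Count how often two vocabulary phrases co-occur in the same document."""
    pairs = Counter()
    for doc in documents:
        grams = set(char_ngrams(doc, n))
        present = sorted(v for v in vocabulary if v in grams)
        for a, b in combinations(present, 2):
            pairs[(a, b)] += 1
    return pairs


if __name__ == "__main__":
    docs = ["觀光教育與環境保護", "旅館管理與觀光教育", "銀髮族觀光與資訊科技"]  # toy examples
    vocab = [phrase for phrase, _ in top_phrases(docs, n=2, k=5)]
    print(vocab)
    print(coword_counts(docs, vocab, n=2))
```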

Keywords: bibliometrics, co-word analysis, word segmentation, tourism research, policy

Procedia PDF Downloads 224