Search results for: classification algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5337

267 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology

Authors: Ugwu O. C., Mamah R. O., Awudu W. S.

Abstract:

This work aims to enhance signal reception and minimize outage probability in a mobile radio network using adaptive beamforming antenna arrays. An empirical real-time drive measurement was carried out on a cellular network of Globalcom Nigeria Limited located at Ikeja, the capital of Lagos State, Nigeria, with reference base station number KJA 004. The measurements comprised Received Signal Strength (RSS) and Bit Error Rate (BER), recorded for accurate characterization of the network's signal strength at the time of the study. RSS and BER were measured from a spectrum-monitoring van with the aid of a ray tracer, at 100-meter intervals up to 700 meters from the transmitting base station. Distance and angular location relative to the reference base station were measured with the help of a Global Positioning System (GPS) receiver. The other equipment used included transmitting equipment measurement software (Temsoftware), laptops, and log files, which recorded received signal strength against distance from the base station. The real-time experiment indicated an outage of about 11%, showing that mobile radio networks are prone to signal failure; this failure can be minimized using an adaptive beamforming antenna array, which yields a significant reduction in BER and hence improved performance of the mobile radio network. In addition to the empirical measurements, this work developed and implemented enhanced mathematical models as reference models for accurate prediction. The proposed signal models are based on continuous-time, discrete-space analysis and some further assumptions. The proposed enhanced models were validated in MATLAB (version 7.6.3.35) and compared with a conventional antenna for accuracy. These outage models were used to manage the blocked-call experience in the mobile radio network; a 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented on the wireless mobile radio network.
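The adaptive array idea the abstract describes can be illustrated with a minimal least-mean-squares (LMS) beamformer on a uniform linear array. This is a sketch, not the study's implementation: the 8-element half-wavelength array, the signal angles, the step size, and the use of known training symbols are all illustrative assumptions.

```python
# Minimal LMS adaptive beamformer sketch: steer toward a desired user at
# broadside while suppressing an interferer, reducing the symbol error power
# (a proxy for BER improvement). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_elements = 8          # ULA with half-wavelength spacing (assumed)
n_samples = 2000
mu = 0.01               # LMS step size (assumed)

def steering_vector(theta_deg):
    """Array response for a plane wave arriving at theta (broadside = 0 deg)."""
    n = np.arange(n_elements)
    return np.exp(-1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

a_des = steering_vector(0.0)    # desired user at broadside
a_int = steering_vector(40.0)   # interferer at 40 degrees

# Known training symbols (BPSK) for the desired user, plus interference and noise.
s = rng.choice([-1.0, 1.0], n_samples)
i = rng.choice([-1.0, 1.0], n_samples)
noise = 0.1 * (rng.standard_normal((n_samples, n_elements))
               + 1j * rng.standard_normal((n_samples, n_elements)))
x = s[:, None] * a_des + i[:, None] * a_int + noise   # snapshots, shape (T, N)

# LMS adaptation: y = w^H x, e = s - y, w <- w + mu * conj(e) * x.
w = np.zeros(n_elements, dtype=complex)
err = np.empty(n_samples)
for k in range(n_samples):
    y = np.vdot(w, x[k])            # w^H x
    e = s[k] - y
    w = w + mu * np.conj(e) * x[k]
    err[k] = abs(e) ** 2

print(err[:100].mean(), err[-100:].mean())  # error power drops as the beam adapts
```

After convergence the weight vector has high gain toward the desired user and a null toward the interferer, which is the mechanism behind the BER reduction the abstract reports.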

Keywords: beamforming algorithm, adaptive beamforming, simulink, reception

Procedia PDF Downloads 34
266 Comparison of Existing Predictors and Development of a Computational Method for S-Palmitoylation Site Identification in Arabidopsis Thaliana

Authors: Ayesha Sanjana Kawser Parsha

Abstract:

S-acylation is an irreversible bond in which cysteine residues are linked, via a thioester linkage, to the fatty acids palmitate (74%) or stearate (22%) at either the COOH or NH2 terminus. Several experimental methods can identify S-palmitoylation sites; however, because they are time-consuming, computational methods are becoming increasingly necessary. Few predictors, however, can locate S-palmitoylation sites in Arabidopsis thaliana with sufficient accuracy, which motivates building a better prediction tool. To identify which machine learning algorithm predicts this site most accurately on the experimental dataset, several prediction tools were examined in this research, including GPS-Palm 6.0, pCysMod, GPS-Lipid 1.0, CSS-Palm 4.0, and NBA-Palm. These analyses were conducted by constructing receiver operating characteristic (ROC) plots and computing the area under the curve (AUC) score. A deep-learning-based prediction tool was then developed using this analysis and three sequence-based inputs: the amino acid composition, the binary encoding profile, and autocorrelation features. The model was developed with five layers, two activation functions, and the associated parameters and hyperparameters. It was built using various combinations of features and, after training and validation, performed best when all features were present, using the experimental dataset for 8- and 10-fold cross-validation. When tested with unseen data, such as the GPS-Palm 6.0 plant set and the pCysMod mouse set, the model again performed well, with an AUC score near 1. Comparing the 10-fold cross-validation AUC of the new model with the established tools' AUC on their respective training sets demonstrates that this model outperforms the prior tools in predicting S-palmitoylation sites in the experimental dataset. The objective of this study is to develop a prediction tool for Arabidopsis thaliana that is more accurate than current tools, as measured by the AUC score. Forecasting S-palmitoylation sites with this method can support both plant food production and the identification of immunological treatment targets.
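Two of the sequence-based inputs mentioned above can be sketched concretely: the binary encoding profile (one-hot vectors over a peptide window centered on a candidate cysteine) and the amino acid composition of that window. This is an illustrative sketch, not the paper's exact pipeline; the 21-residue window size and the 'A' padding are assumptions.

```python
# Sketch of two feature encodings for a cysteine-centered peptide window:
# binary (one-hot) encoding and amino acid composition (AAC).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def binary_encoding(window):
    """One-hot encode a peptide window: len(window) * 20 features."""
    vec = []
    for res in window:
        vec.extend(1.0 if res == aa else 0.0 for aa in AMINO_ACIDS)
    return vec

def amino_acid_composition(window):
    """Fraction of each of the 20 residues in the window (sums to 1)."""
    return [window.count(aa) / len(window) for aa in AMINO_ACIDS]

def extract_window(sequence, site, half=10, pad="A"):
    """21-residue window around a 0-based cysteine site, padded at the ends.
    Padding with 'A' is an arbitrary choice for this sketch."""
    left = sequence[max(0, site - half):site].rjust(half, pad)
    right = sequence[site + 1:site + 1 + half].ljust(half, pad)
    return left + sequence[site] + right

seq = "MKTAYCLLVGACFASTNSCAR"      # toy sequence, not from the study
win = extract_window(seq, seq.index("C"))
features = binary_encoding(win) + amino_acid_composition(win)
print(len(win), len(features))    # 21 residues -> 21*20 + 20 = 440 features
```

Vectors like this, one per candidate cysteine, would form the input rows that a multi-layer network is trained on with k-fold cross-validation.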

Keywords: S-palmitoylation, ROC plot, area under the curve, cross-validation score

Procedia PDF Downloads 68
265 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation. NILM is a technique for identifying individual appliances by analyzing whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and, at the final stage, general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for accurate identification of the household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the operating range of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. To fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. The identification process likewise uses unsupervised techniques such as DTW; to the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification with unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real Reference Energy Disaggregation Dataset (REDD). Performance metrics are computed from the confusion matrix, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously reported in the literature that are based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
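The DTW step the abstract relies on can be sketched with the classic O(n*m) dynamic program. This is an illustrative implementation with an absolute-difference local cost, not the authors' code; the toy power signatures are invented for the example.

```python
# Minimal Dynamic Time Warping (DTW) sketch for matching appliance power
# signatures of different durations, as used in unsupervised NILM matching.
def dtw_distance(a, b):
    """DTW alignment cost between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of match, insertion, deletion
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

# A fridge-like cycle matches a time-stretched copy of itself far better
# than a different (heater-like) signature, despite unequal lengths.
fridge = [0, 80, 85, 82, 80, 0]          # watts, 1-minute samples (toy data)
fridge_slow = [0, 80, 82, 85, 84, 82, 80, 0]
heater = [0, 2000, 2000, 2000, 0]
print(dtw_distance(fridge, fridge_slow) < dtw_distance(fridge, heater))  # True
```

Because DTW tolerates temporal stretching, it suits low-sampling-rate data where the same appliance cycle can span a varying number of one-minute samples.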

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 76
264 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. The blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict ocean algae concentration with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. This study therefore suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the marine water environment of Korea. The method used GOCI images of the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure), as the database with which to exploit the optical characteristics of algae and train a deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the algae concentration from the extracted features. The deep learning model was trained with a backpropagation learning strategy. The established method was tested and compared with the performance of the GOCI Data Processing System (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. The deep learning model was thus trained successfully to assess algae concentration despite the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
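The backpropagation-trained regression stage can be illustrated with a small feed-forward ANN in NumPy. This is a sketch under stated assumptions: the study's real inputs are CNN features extracted from GOCI radiances plus weather variables, whereas here synthetic "radiance" features and a synthetic target stand in, and the layer sizes and learning rate are arbitrary.

```python
# Sketch of the ANN regression stage: a one-hidden-layer network trained by
# backpropagation (full-batch gradient descent on squared error).
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: 3 input features, target is a smooth function of them.
X = rng.uniform(0.0, 1.0, (256, 3))
y = (2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2]).reshape(-1, 1)

# One hidden layer with tanh, linear output.
W1 = rng.standard_normal((3, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.05

losses = []
for epoch in range(500):
    h = np.tanh(X @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # backpropagation: gradients of the mean squared error
    g_pred = 2.0 * err / len(X)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1.0 - h ** 2)        # tanh derivative
    g_W1 = X.T @ g_z
    g_b1 = g_z.sum(axis=0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(losses[0], losses[-1])  # training loss should fall substantially
```

In the study, the input row would instead be the CNN feature vector for one GOCI pixel/scene, and the output the estimated algae concentration.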

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 182
263 A Qualitative Study of Experienced Early Childhood Teachers Resolving Workplace Challenges with Character Strengths

Authors: Michael J. Haslip

Abstract:

Character strength application improves performance and well-being in adults across industries, but the potential impact of character strength training among early childhood educators is mostly unknown. To explore how character strengths are applied by early childhood educators at work, a qualitative study was completed alongside professional development provided to a group of in-service teachers of children ages 0-5 in Philadelphia, Pennsylvania, United States. Study participants (n=17) were all female. The majority of participants were non-white, in full-time lead or assistant teacher roles, had at least ten years of experience and a bachelor’s degree. Teachers were attending professional development weekly for 2 hours over a 10-week period on the topic of social and emotional learning and child guidance. Related to this training were modules and sessions on identifying a teacher’s character strength profile using the Values in Action classification of 24 strengths (e.g., humility, perseverance) that have a scientific basis. Teachers were then asked to apply their character strengths to help resolve current workplace challenges. This study identifies which character strengths the teachers reported using most frequently and the nature of the workplace challenges being resolved in this context. The study also reports how difficult these challenges were to the teachers and their success rate at resolving workplace challenges using a character strength application plan. The study also documents how teachers’ own use of character strengths relates to their modeling of these same traits (e.g., kindness, teamwork) for children, especially when the nature of the workplace challenge directly involves the children, such as when addressing issues of classroom management and behavior. 
Data were collected through action plans (reflective templates) in which teachers described the workplace challenge they were facing, the character strengths they used to address it, their plan for applying those strengths, and the subsequent results. Content analysis and thematic analysis were used to investigate the research questions, through approaches that included classifying, connecting, describing, and interpreting the data reported by educators. Findings reveal that teachers most frequently use kindness, leadership, fairness, hope, and love to address workplace challenges ranging from low to high difficulty and involving children, coworkers, parents, and self-management. Teachers reported a 71% success rate at fully or mostly resolving workplace challenges using the action plan method introduced during professional development. Teachers matched character strengths to challenges in different ways, with certain strengths used mostly when the challenge involved children (love, forgiveness), others mostly with adults (bravery, teamwork), and others universally (leadership, kindness). Furthermore, teachers' application of character strengths at work involved directly modeling character for children in 31% of reported cases. The application of character strengths among early childhood educators may play a significant role in improving teacher well-being, reducing job stress, and improving efforts to model character for young children.

Keywords: character strengths, positive psychology, professional development, social-emotional learning

Procedia PDF Downloads 103
262 Drone Swarm Routing and Scheduling for Off-shore Wind Turbine Blades Inspection

Authors: Mohanad Al-Behadili, Xiang Song, Djamila Ouelhadj, Alex Fraess-Ehrfeld

Abstract:

In off-shore wind farms, turbine blade inspection accessibility under various sea states is very challenging and greatly affects the downtime of wind turbines. Maintenance of any offshore system is not an easy task due to restricted logistics and accessibility. The multirotor unmanned helicopter is of increasing interest in inspection applications due to its manoeuvrability and payload capacity, and these advantages increase when many such aircraft are deployed simultaneously in a swarm. Hence, this paper proposes a drone swarm framework for inspecting offshore wind turbine blades and nacelles so as to reduce downtime. One of the big challenges of this task is that, when operating a drone swarm, an individual drone may not have enough power to fly and communicate throughout a mission, and it has no capability of refueling due to its small size. Once the drone's power is drained, no signals are transmitted and the links become intermittent. Vessels equipped with 5G masts and small power units are therefore utilised as platforms on which drones recharge or swap batteries. The research aims at designing a smart energy management system which provides automated vessel and drone routing and recharging plans. To achieve this goal, a novel mathematical optimisation model is developed with the main objective of minimising the number of drones and vessels (which carry the charging stations) and the downtime of the wind turbines. A number of constraints must be considered: each wind turbine must be inspected once and only once by one drone; each drone can inspect at most one wind turbine after recharging before flying back to a charging station; collisions must be avoided during flight; and all wind turbines in the wind farm must be inspected within the given time window. We have developed a real-time Ant Colony Optimisation (ACO) algorithm to generate near-optimal solutions to the drone swarm routing problem. The scheduler generates efficient, real-time solutions indicating the inspection tasks, time windows, and the optimal routes for the drones to access the turbines. Experiments are conducted to evaluate the quality of the solutions generated by the ACO algorithm.
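The ACO core can be sketched for the simplest sub-problem: one drone touring a set of turbine positions. This is an illustrative TSP-style sketch, not the paper's algorithm; the recharging vessels, time windows, and collision-avoidance constraints of the full model are deliberately omitted, and all parameter values are assumptions.

```python
# Minimal Ant Colony Optimisation sketch for a single drone's inspection
# route over turbine positions: ants build tours probabilistically from
# pheromone and inverse distance, then pheromone evaporates and is
# reinforced on good tours.
import math
import random

def aco_route(points, n_ants=20, n_iters=50, alpha=1.0, beta=2.0,
              rho=0.5, q=1.0, seed=1):
    """Return (best_tour, best_length) visiting every point exactly once."""
    rnd = random.Random(seed)
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rnd.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # transition probability ~ pheromone^alpha * (1/distance)^beta
                weights = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta)
                           for j in unvisited]
                r = rnd.uniform(0.0, sum(w for _, w in weights))
                for j, w in weights:
                    r -= w
                    if r <= 0:
                        break
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                       # evaporation
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:               # deposit ~ tour quality
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len

turbines = [(0, 0), (0, 1), (1, 1), (1, 0)]      # toy unit-square layout
tour, length = aco_route(turbines)
print(tour, length)                              # a perimeter tour, length 4.0
```

The full model would add vessel positions as recharge nodes and penalise constraint violations in the tour cost.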

Keywords: drone swarm, routing, scheduling, optimisation model, ant colony optimisation

Procedia PDF Downloads 254
261 A Model for Teaching Arabic Grammar in Light of the Common European Framework of Reference for Languages

Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla

Abstract:

The complexity of Arabic grammar poses challenges for learners, particularly its arrangement, classification, abundance, and bifurcation. This challenge results from the contextual factors that gave rise to the grammatical rules in question, as well as from the pedagogical approach employed at the time, which was tailored to the needs of learners of that particular historical period; modern-day students consequently encounter the same obstacle. This calls for a thorough examination of the arrangement and categorization of Arabic grammatical rules based on particular criteria, as well as an assessment of their objectives. Additionally, it is necessary to identify the prevalent and renowned grammatical rules, as well as those that are infrequent, obscure, and disregarded. This paper presents a compilation of grammatical rules that require arrangement and categorization in accordance with the standards outlined in the Common European Framework of Reference for Languages (CEFR). In addition to facilitating comprehension of the curriculum, accommodating learners' requirements, and establishing the fundamental competencies for achieving proficiency in Arabic, it is imperative to ascertain the conventions that language learners need, in alignment with explicitly delineated benchmarks such as the CEFR criteria. The aim of this study is to reduce the quantity of grammatical rules typically presented to non-native Arabic speakers in Arabic textbooks. This reduction is expected to strengthen learners' motivation to continue their Arabic language acquisition and to approach the proficiency of native speakers. The primary obstacle faced by learners is the intricate nature of Arabic grammar; the proliferation and complexity of the rules in Arabic textbooks designed for non-native speakers are noteworthy. The inadequate organisation and delivery of the material create the impression that the grammar is being imparted so that the student may memorise "Alfiyyat-Ibn-Malik." The sequence of grammar instruction has consequently been distorted: rules intended for later instruction are presented first, and those intended for earlier instruction are presented later. Students often concentrate on grammatical rules that are not strictly required while neglecting the rules commonly used in everyday speech and writing. Non-Arab students are taught chapters of Arabic grammar that are infrequently encountered in Arabic literature and that may be subjects of debate among grammarians. These findings derive from the statistical analysis and investigations conducted by the researcher, which will be presented in the course of the research. To teach grammatical rules to non-Arabic speakers, it is necessary to discern the most prevalent grammatical structures in grammar manuals and linguistic literature (the study sample). The present proposal suggests allocating grammatical structures across linguistic levels, taking into account the CEFR guidelines as well as the grammatical structures that non-Arabic-speaking learners need in order to produce modern, cohesive, and comprehensible language.

Keywords: grammar, Arabic, functional, framework, problems, standards, statistical, popularity, analysis

Procedia PDF Downloads 84
260 Concept of Tourist Village on Kampung Karaton of Karaton Kasunanan Surakarta, Central Java, Indonesia

Authors: Naniek Widayati Priyomarsono

Abstract:

Introduction: At the formation of the Karaton, in the era of the Javanese kingdom town, which held power over the region outside the castle town (called Mancanegara), the karaton settlement functioned both as a "space-between" and a "space of defense"; it was also one of the components of the governmental structure and karaton power of the time (the internal servants, abdi dalem, and the royal relatives, sentana dalem). Upon the independence of Indonesia in 1945, the "Kingdom-City" converted its political status into part of a democratic town managed by statutes based on its classification. This has altered the local cultural hierarchy through subsequent physical development and events. The dynamics of socio-economic activities in Kampung Karaton, surrounded by the buildings of the Karaton Kasunanan complex, have disturbed the urban system of the region. The cultural image of the area is also fading, with weak visual access to the existing cultural artefacts. This development gives little appreciation to the established image that provides the identity of the Karaton Kasunanan in particular and of the city of Surakarta in general. Method: the strategy used is grounded theory research (research that provides a strong base for a theory). The research focuses on the actors, active and passive, who are relevantly involved in the process of change of the Karaton settlement. The accumulated data are oriented toward an "investigation focus" on the actors affecting that change, whether internal or external. The investigation results are combined with field observation data, documentation, and literature study to obtain accurate findings. Findings: the Karaton village has potential products as attractions, human resource support, strong motivation from the community still living in the settlement, supporting facilities and infrastructure, tourism event-supporting facilities, cultural art institutions, and available fields or development areas. Data analysis: to achieve the expected result, restoration is needed in the direction of socio-cultural and economic development, by carrying out socio-cultural, economic, and political development strategies. The steps to take are: socialization of the program of Karaton village as a tourism village; economic development of the local community; a regeneration pattern; filtering and selection of tourism development; development of an integrated planning system; development with a persuasive approach; regulation; market mechanisms; development of the socio-cultural event sector; and political development for the regional activity sector. Summary: if the restoration is carried out by involving the community as the subject of the settlement (active participation in the field), and is managed and packaged in an attractive and natural way together with the development of tourism-supporting facilities, the village of Karaton Kasunanan Surakarta will be ready to receive domestic and foreign tourists.

Keywords: karaton village, finding, restoration, economy, Indonesia

Procedia PDF Downloads 435
259 Optimized Scheduling of Domestic Load Based on User Defined Constraints in a Real-Time Tariff Scenario

Authors: Madia Safdar, G. Amjad Hussain, Mashhood Ahmad

Abstract:

One of the major challenges of today's era is peak demand, which stresses the transmission lines and raises the cost of energy generation and, ultimately, the electricity bills of end users. Peak demand used to be handled by supply-side management; nowadays, however, that approach has been withdrawn in favour of the potential of demand-side management (DSM), with its economic and environmental advantages. DSM of domestic load can play a vital role in reducing peak load demand on the network and provides significant cost savings. This paper elaborates the potential of demand response (DR) in reducing peak load demand and the electricity bills of end users. For this purpose, the domestic appliances are modeled in MATLAB Simulink and controlled by a module called the energy management controller. The devices are categorized into controllable and uncontrollable loads and are operated according to a real-time tariff pricing pattern instead of fixed-time or variable pricing. The energy management controller decides the switching instants of the controllable appliances based on the results of optimization algorithms. The MILP (mixed-integer linear programming) algorithm in the GAMS software is used for optimization. Different cases apply different optimization constraints, reflecting the comfort, needs, and priorities of the end users. The results are compared, and the savings in electricity bills under real-time pricing versus fixed tariff pricing are discussed, demonstrating the potential of demand-side management to reduce electricity bills and peak loads. Using a real-time pricing tariff instead of a fixed tariff helps to reduce electricity bills. Moreover, the simulation results of the proposed energy management system show that the attained power savings are in the high range. It is anticipated that the results of this research will prove highly useful to utility companies as well as for the improvement of domestic DR.
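The scheduling idea can be sketched on a toy instance: choose start times for controllable appliances so that total cost under a real-time tariff is minimised, subject to user-defined allowed windows. The paper formulates this as an MILP in GAMS; here, as an illustrative sketch with invented tariff and appliance data, the tiny instance is solved by exhaustive search, which yields the same optimum at this size.

```python
# Sketch of controllable-load scheduling under a real-time tariff:
# minimise total energy cost over all feasible appliance start times.
from itertools import product

# hourly prices (currency/kWh) for a 6-hour horizon (toy data)
tariff = [5.0, 3.0, 2.0, 2.0, 3.0, 5.0]

# appliance: (power_kW, run_hours, earliest_start, latest_finish) - toy data
appliances = {
    "washing_machine": (1.0, 2, 0, 6),
    "dishwasher": (0.8, 1, 2, 6),
}

def run_cost(power, duration, start):
    """Cost of running an appliance from `start` for `duration` hours."""
    return sum(power * tariff[t] for t in range(start, start + duration))

def optimise(appliances, tariff):
    names = list(appliances)
    # feasible start times per appliance, respecting its allowed window
    options = [range(appliances[n][2], appliances[n][3] - appliances[n][1] + 1)
               for n in names]
    best = None
    for starts in product(*options):
        cost = sum(run_cost(appliances[n][0], appliances[n][1], s)
                   for n, s in zip(names, starts))
        if best is None or cost < best[1]:
            best = (dict(zip(names, starts)), cost)
    return best

schedule, cost = optimise(appliances, tariff)
print(schedule, cost)   # both loads shift into the cheap 2.0/kWh hours
```

An MILP solver does the same search implicitly via binary start-time variables, and scales to many appliances and extra comfort constraints where enumeration cannot.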

Keywords: controllable and uncontrollable domestic loads, demand response, demand side management, optimization, MILP (mixed integer linear programming)

Procedia PDF Downloads 299
258 DNA Barcoding for Identification of Dengue Vectors from Assam and Arunachal Pradesh: North-Eastern States in India

Authors: Monika Soni, Shovonlal Bhowmick, Chandra Bhattacharya, Jitendra Sharma, Prafulla Dutta, Jagadish Mahanta

Abstract:

Aedes aegypti and Aedes albopictus are considered the two major vectors of dengue virus transmission. In North-East India, the two states of Assam and Arunachal Pradesh are known to be highly endemic zones for dengue and Chikungunya viral infection. The taxonomic classification of medically important vectors is important for mapping actual evolutionary trends and for epidemiological studies. However, misidentification of mosquito species in field-collected specimens could negatively affect vector-borne disease control policy. DNA barcoding is a prominent method for recording available species, differentiating new additions, and detecting changes in population structure. In this study, a combined approach of morphology and the molecular technique of DNA barcoding was adopted to explore sequence variation in the mitochondrial cytochrome c oxidase subunit I (COI) gene within dengue vectors. The study mapped the distribution of the dengue vectors in the two states of Assam and Arunachal Pradesh, India. Approximately five hundred mosquito specimens were collected from different parts of the two states, and their morphological features were compared with taxonomic keys. The detailed taxonomic analysis identified two species, Aedes aegypti and Aedes albopictus. Aedes aegypti comprised 66.6% of the specimens and was the dominant dengue vector species. The sequences obtained through a standard DNA barcoding protocol were compared with the public databases GenBank and BOLD. All Aedes albopictus sequences showed 100% similarity, whereas the Aedes aegypti sequences showed 99.77-100% similarity of the COI gene with the same species from different geographic locations, based on a BOLD database search. Fifty-nine sequences of the same and related taxa, from different dengue-prevalent geographical regions, were retrieved from the NCBI and BOLD databases to determine evolutionary distances via phylogenetic analysis. Neighbor-Joining (NJ) and Maximum Likelihood (ML) phylogenetic trees were constructed in MEGA 6.06 with 1000 bootstrap replicates using the Kimura 2-parameter model. Sequence divergence analysis found that intraspecific divergence ranged from 0.0 to 2.0% and interspecific divergence from 11.0 to 12.0%. Transitional and transversional substitutions were tested individually, and the sequences were deposited in the NCBI GenBank database. This is, to our knowledge, the first DNA barcoding analysis of Aedes mosquitoes from the North-Eastern states of India, and it confirms the range expansion of two important mosquito species. Overall, this study offers insight into the molecular ecology of the dengue vectors of North-Eastern India, which will enhance understanding and improve the existing entomological surveillance and vector incrimination programmes.
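The Kimura 2-parameter distance used in the divergence analysis above can be sketched directly: transitions (A<->G, C<->T) and transversions are counted separately over an aligned pair of sequences. This is an illustrative implementation on toy sequences, not the study's MEGA workflow.

```python
# Minimal Kimura 2-parameter (K2P) distance sketch for aligned DNA sequences:
# d = -1/2 * ln(1 - 2P - Q) - 1/4 * ln(1 - 2Q), with P the transition
# proportion and Q the transversion proportion.
import math

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def k2p_distance(seq1, seq2):
    """K2P distance between two aligned, equal-length DNA sequences."""
    assert len(seq1) == len(seq2)
    transitions = transversions = 0
    for x, y in zip(seq1, seq2):
        if x == y:
            continue
        # same base class (purine/purine or pyrimidine/pyrimidine) = transition
        if {x, y} <= PURINES or {x, y} <= PYRIMIDINES:
            transitions += 1
        else:
            transversions += 1
    p = transitions / len(seq1)
    q = transversions / len(seq1)
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)

# Toy example: 10 transitions out of 100 aligned sites, no transversions.
a = "A" * 100
b = "G" * 10 + "A" * 90
print(round(k2p_distance(a, b), 4))  # 0.1116
```

Pairwise distances like this, computed over all COI sequences, feed the Neighbor-Joining tree construction.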

Keywords: COI, dengue vectors, DNA barcoding, molecular identification, North-east India, phylogenetics

Procedia PDF Downloads 297
257 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. 
In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
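The training-data generation step the abstract describes can be sketched for the fOU process. This is an illustrative sketch, not the authors' fast generator: fractional Gaussian noise is drawn exactly via a Cholesky factor of its covariance, then the fOU SDE dX = -theta X dt + sigma dB^H is discretised with an Euler scheme. Cholesky is O(n^3), which is exactly why faster exact samplers (e.g. circulant-embedding methods) are needed for the large training sets mentioned above. All parameter values here are assumptions.

```python
# Sketch: exact fractional Gaussian noise via Cholesky, then Euler
# discretisation of the fractional Ornstein-Uhlenbeck process.
import numpy as np

def fgn_cov(k, hurst, dt):
    """Autocovariance of fractional Gaussian noise increments over step dt."""
    k = abs(k)
    return 0.5 * dt ** (2 * hurst) * (
        (k + 1) ** (2 * hurst) - 2 * k ** (2 * hurst) + abs(k - 1) ** (2 * hurst)
    )

def fou_path(n, dt, hurst, theta, sigma, x0=0.0, seed=7):
    """One fOU sample path with n steps (length n + 1)."""
    rng = np.random.default_rng(seed)
    cov = np.array([[fgn_cov(i - j, hurst, dt) for j in range(n)]
                    for i in range(n)])
    increments = np.linalg.cholesky(cov) @ rng.standard_normal(n)  # dB^H
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] - theta * x[k] * dt + sigma * increments[k]
    return x

path = fou_path(n=500, dt=0.01, hurst=0.7, theta=2.0, sigma=0.3)
print(path.shape)  # (501,)
```

Paths like this, generated in bulk over a grid of (H, theta, sigma) values, would form the labelled training set from which a neural network learns to invert the path-to-parameter map.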

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 112
256 Outdoor Thermal Comfort Strategies: The Case of Cool Facades

Authors: Noelia L. Alchapar, Cláudia C. Pezzuto, Erica N. Correa

Abstract:

Mitigating urban overheating is key to achieving the environmental and energy sustainability of cities. The management of the optical properties of the materials that make up the urban envelope -roofing, pavement, and facades- constitutes a profitable and effective tool to improve the urban microclimate and rehabilitate urban areas. Each material that makes up the urban envelope has a different capacity to reflect received solar radiation, which alters the fraction of solar radiation absorbed by the city. However, the paradigm of increasing solar reflectance in all areas of the city without distinguishing their relative position within the urban canyon can cause serious problems of overheating and discomfort among its inhabitants. The hypothesis that supports the research postulates that not all reflective technologies that contribute to urban radiative cooling favor the thermal comfort conditions of pedestrians in equal measure. The objective of this work is to determine to what degree the management of the optical properties of facades modifies outdoor thermal comfort, given that the mitigation potential of materials with high reflectance in facades is strongly conditioned by geographical variables and by the geometric characteristics of the urban profile aspect ratio (H/W). This research was carried out in two climatic contexts: the city of Mendoza, Argentina, and the city of Campinas, Brazil, classified under the Köppen system as BWk and Cwa, respectively. The two selected areas have comparable urban morphology patterns and are located in regions with low horizontal building density and residential zoning. The microclimatic conditions were monitored during the summer period with fixed temperature and humidity sensors placed within the street canyons. The microclimate was simulated in ENVI-met V5.
A grid resolution of 3.5 x 3.5 x 3.5 m was used for both cities, giving a domain of 145 x 145 x 30 grid cells. Based on the validated theoretical model, ten scenarios were simulated by modifying building height and the solar reflectivity of facades. The façade solar reflectivity levels were low (0.3) and high (0.75), and the density scenarios ranged from the 1st to the 5th level. The performance of the study scenarios was assessed by comparing air temperature, physiological equivalent temperature (PET), and the universal thermal climate index (UTCI). As a result, it is observed that the behavior of materials in urban outdoor space depends on complex interactions: many urban environmental factors are influential, including constructive characteristics, urban morphology, geographic location, local climate, and so forth. The role of the vertical urban envelope is decisive for the reduction of urban overheating. One cause of thermal gain is multiple reflections within the urban canyon, which affect not only the air temperature but also pedestrian thermal comfort. One of the main findings of this work is the remarkable importance of considering both urban warming and pedestrian thermal comfort in urban mitigation strategies.

Keywords: materials facades, solar reflectivity, thermal comfort, urban cooling

Procedia PDF Downloads 90
255 Challenges for Persons with Disabilities During COVID-19 Pandemic in Thailand

Authors: Tavee Cheausuwantavee

Abstract:

The COVID-19 pandemic has significantly impacted everyone's life. Persons with disabilities (PWDs) in Thailand have also been affected by the COVID-19 situation in many aspects of their lives, while appropriate services from the government and providers have been lacking. Research projects have focused mainly on health precaution and protection. Rapid needs assessments of populations and vulnerable groups were limited and conducted via social media and online surveys. However, little is known about the real problems and needs of Thai PWDs during the COVID-19 pandemic, knowledge needed for an effective plan and integrated services for those PWDs. Therefore, this study aims to explore the diverse problems and needs of Thai PWDs in the COVID-19 pandemic. Results from the study can be used by the government and other stakeholders for further effective services. Methods: This study used a mixed-method design that consisted of both quantitative and qualitative measures. For the quantitative approach, 744 PWDs and caregivers of all types of PWDs were selected by proportional multistage stratified random sampling according to disability classification and geographic location. Questionnaires with 59 items regarding participant characteristics, problems, and needs in health, education, employment, and other social inclusion were distributed to all participants; some caregivers completed questionnaires when PWDs were not able to do so due to limited communication and/or literacy skills. Completed questionnaires were analyzed by descriptive statistics. For the qualitative design, 62 key informants who were PWDs or caregivers were selected by purposive sampling. Ten focus groups, each consisting of 5-6 participants, and 7 in-depth interviews from all the groups identified above were conducted by researchers across five regions.
Focus group and in-depth interview guidelines contained 6 items regarding problems and needs in health, education, employment, other social inclusion, and coping during the COVID-19 pandemic. Data were analyzed using a modification of thematic content analysis. Results: Both the quantitative and qualitative studies showed that PWDs and their caregivers had significant problems and needs in all aspects of their lives, including income and employment opportunity, daily living and social inclusion, health, and education, respectively. These problems and needs were related to each other, forming a vicious cycle. Participants also drew positive lessons from the pandemic, including health protection, financial planning, family cohesion, and virtual technology literacy and innovation. Conclusion and implications: There have been challenges facing all life aspects of PWDs in Thailand during the COVID-19 pandemic, particularly incomes and daily living. These challenges form a complicated, vicious cycle. There were also positive lessons learned by participants from the pandemic. Recommendations for the government and stakeholders in the COVID-19 pandemic for PWDs are the following. First, the health protection strategy and policy for PWDs should be promoted together with other quality-of-life development, including income generation, education, and social inclusion. Second, virtual technology and alternative innovation should be enhanced for proactive service providers. Third, accessible information during the pandemic must be ensured for all PWDs. Fourth, lessons learned from the pandemic should be shared and disseminated for crisis preparation and a positive mindset in the disruptive world.

Keywords: challenge, COVID-19, disability, Thailand

Procedia PDF Downloads 74
254 Nurse-Led Codes: Practical Application in the Emergency Department during a Global Pandemic

Authors: F. DelGaudio, H. Gill

Abstract:

Resuscitation during cardiopulmonary arrest (CPA) is a dynamic, high-stress, high-acuity situation that can easily lead to communication breakdown and errors. The care of these high-acuity patients has also been shown to increase the physiologic stress and task saturation of providers, which can negatively impact the care being provided. These difficulties are further complicated during a global pandemic and pose a significant safety risk to bedside providers. Nurse-led codes are a relatively new concept that may be a potential solution for alleviating some of these difficulties. An experienced nurse who has completed advanced cardiac life support (ACLS) and additional training assumed the responsibility of directing the mechanics of the appropriate ACLS algorithm. This was done in conjunction with a physician who also acted as a physician leader. The additional nurse-led code training included a multi-disciplinary in situ simulation of a CPA on a suspected COVID-19 patient. During the CPA, the nurse leader's responsibilities include: ensuring adequate compression depth and rate, minimizing interruptions in chest compressions, the timing of rhythm/pulse checks, and appropriate medication administration. In addition, the nurse leader also functions as a last-line safety check for appropriate personal protective equipment and for limiting the exposure of staff. The use of nurse-led codes for CPA has been shown to decrease cognitive overload and task saturation for the physician, as well as to limit the number of staff being exposed to a potentially infectious patient. Real-world application has allowed physicians to perform and oversee high-risk procedures such as intubation, line placement, and point-of-care ultrasound without sacrificing the integrity of the resuscitation. Nurse-led codes have also given the physician the bandwidth to review pertinent medical history and advance directives, determine reversible causes, and have end-of-life conversations with family.
While there is a paucity of research on the effectiveness of nurse-led codes, there are many potentially significant benefits. In addition to its value during a pandemic, it may also be beneficial during complex circumstances such as extracorporeal cardiopulmonary resuscitation.

Keywords: cardiopulmonary arrest, COVID-19, nurse-led code, task saturation

Procedia PDF Downloads 149
253 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and is now a popular research area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and healthy, reliable grid systems. Effective forecasting of renewable energy load leads decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially Long Short-Term Memory (LSTM) algorithms. Deep learning allows multiple layers of models to learn representations of data. LSTM algorithms are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as predicting wind and solar power. Historical load and weather information are the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out with these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by varying the number of layers, cell counts, and dropout rates. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD). ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (mean absolute error) and MSE (mean squared error).
The best MAE results among the 432 tested models include 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models compares favorably with results reported in the literature.
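The paper's exact network configuration is not given; as a minimal, framework-free illustration of why LSTM cells can "store information for long periods of time", the sketch below implements a single LSTM step in NumPy, with the forget gate controlling how much of the cell state survives each step. The weights here are random placeholders, not trained values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked weights: input, forget, cell, output."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:H])         # input gate: how much new info enters
    f = sigmoid(z[H:2*H])      # forget gate: how much memory is kept
    g = np.tanh(z[2*H:3*H])    # candidate cell state
    o = sigmoid(z[3*H:])       # output gate
    c = f * c_prev + i * g     # cell state carries information across time
    h = o * np.tanh(c)         # hidden state / output
    return h, c

def run_sequence(xs, D, H, seed=0):
    """Run a sequence through one randomly initialized LSTM cell."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 0.1, (4 * H, D))
    U = rng.normal(0, 0.1, (4 * H, H))
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
    return h
```

In the study's setting, each input step would be a vector of historical load and weather features for one hour, and the final hidden state would feed a regression layer producing the load forecast.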

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 260
252 Extracting Opinions from Big Data of Indonesian Customer Reviews Using Hadoop MapReduce

Authors: Veronica S. Moertini, Vinsensius Kevin, Gede Karya

Abstract:

Customer reviews have been collected by many kinds of e-commerce websites selling products, services, hotel rooms, tickets, and so on. Each website collects its own customer reviews. The reviews can be crawled from those websites and stored as big data. Text analysis techniques can be used to analyze that data to produce summarized information, such as customer opinions. These opinions can then be published by independent service-provider websites and used to help customers choose the most suitable products or services. As the opinions are analyzed from big data of reviews originating from many websites, the results are expected to be more trusted and accurate. Indonesian customers write reviews in the Indonesian language, which comes with its own structures and uniqueness. We found that most of the reviews are expressed in “daily language”, which is informal, does not follow correct grammar, and contains many abbreviations, slang, and non-formal words. Hadoop is an emerging platform aimed at storing and analyzing big data in distributed systems. A Hadoop cluster consists of master and slave nodes/computers operated in a network. Hadoop comes with a distributed file system (HDFS) and the MapReduce framework for supporting parallel computation. However, MapReduce has a weakness: it is inefficient for iterative computations, since the cost of reading/writing data (I/O cost) is high. Given this fact, we conclude that MapReduce is best suited to “one-pass” computation. In this research, we develop an efficient technique for extracting or mining opinions from big data of Indonesian reviews, based on MapReduce with one-pass computation. In designing the algorithm, we avoid iterative computation and instead adopt a “look-up table” technique.
The stages of the proposed technique are: (1) crawling the review data from websites; (2) cleaning the raw reviews and finding root words; (3) computing the frequency of meaningful opinion words; (4) analyzing customers' sentiments towards defined objects. The experiments for evaluating the performance of the technique were conducted on a Hadoop cluster with 14 slave nodes. The results show that the proposed technique (stages 2 to 4) discovers useful opinions, processes big data efficiently, and scales well.
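The one-pass look-up-table idea behind stages 2 and 3 can be illustrated with a toy map/reduce in plain Python. The root-word and opinion-word dictionaries below are hypothetical stand-ins for the paper's Indonesian tables; in the real system the map and reduce phases would run as Hadoop MapReduce jobs over data in HDFS.

```python
from collections import Counter
from itertools import chain

# Hypothetical look-up tables standing in for the paper's Indonesian
# root-word and opinion-word dictionaries.
ROOT_WORDS = {"bagus": "bagus", "bgs": "bagus", "jelek": "jelek",
              "mantap": "mantap", "mantab": "mantap"}
OPINION_WORDS = {"bagus", "jelek", "mantap"}

def map_phase(review):
    """Map: normalize each token via the look-up table and emit
    (opinion_word, 1) pairs -- a single pass, no iteration."""
    for token in review.lower().split():
        root = ROOT_WORDS.get(token)
        if root in OPINION_WORDS:
            yield (root, 1)

def reduce_phase(pairs):
    """Reduce: sum the counts per opinion word."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

def mine_opinions(reviews):
    return reduce_phase(chain.from_iterable(map_phase(r) for r in reviews))
```

Because every token is resolved by a dictionary lookup rather than by an iterative stemming loop, each review is read exactly once, which is what keeps the I/O cost of the MapReduce job low.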

Keywords: big data analysis, Hadoop MapReduce, analyzing text data, mining Indonesian reviews

Procedia PDF Downloads 199
251 Computer-Aided Drug Repurposing for Mycobacterium Tuberculosis by Targeting Tryptophanyl-tRNA Synthetase

Authors: Neslihan Demirci, Serdar Durdağı

Abstract:

Mycobacterium tuberculosis is still a worldwide disease-causing agent that, according to the WHO, led to the death of 1.5 million people from tuberculosis (TB) in 2020. The bacteria reside in macrophages located specifically in the lung. There is a known quadruple drug therapy regimen for TB consisting of isoniazid (INH), rifampin (RIF), pyrazinamide (PZA), and ethambutol (EMB). Over the past 60 years, there have been great contributions to treatment options, such as the recently approved delamanid (OPC67683) and bedaquiline (TMC207/R207910), targeting mycolic acid and ATP synthesis, respectively. There are also natural compounds, chuangxinmycin and indolmycin, that can block the tryptophanyl-tRNA synthetase (TrpRS) enzyme. Yet drug resistance has already been reported for those agents. In this study, the newly released TrpRS enzyme structure is investigated for potential inhibitors among already synthesized molecules, to help treat resistant cases and to propose an alternative drug for the quadruple drug therapy of tuberculosis. Maestro (Schrödinger) was used for docking and molecular dynamics simulations. An in-house library containing ~8000 compounds, together with a total of 57 FDA-approved indole-containing compounds obtained from ChEMBL, was used for docking against both the ATP and the tryptophan binding pockets. The best of the 57 indole-containing compounds were subjected to hit expansion and later compared with virtual screening workflow (VSW) results. After docking, VSW was performed using the Glide-XP docking algorithm. When compared, VSW alone performed better than the hit expansion module. The best-scoring compounds were kept for 10 ns molecular dynamics simulations in Desmond. Further, 100 ns molecular dynamics simulations were performed for molecules selected according to Z-score. The top three MM-GBSA-scored compounds were subjected to steered molecular dynamics (SMD) simulations in Gromacs.
While the SMD simulations are still being conducted, ponesimod (for multiple sclerosis), vilanterol (a β₂ adrenoreceptor agonist), and silodosin (for benign prostatic hyperplasia) were found to have a significant affinity for tuberculosis TrpRS, which motivates expanding the research with in vitro studies. Interestingly, the top-scoring ponesimod has been reported to have a side effect that makes patients prone to upper respiratory tract infections.
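The selection step, ranking scored compounds by Z-score and keeping the top candidates for SMD, can be sketched as follows. The score values below are invented for illustration and are not the paper's MM-GBSA results.

```python
from statistics import mean, stdev

# Hypothetical binding scores (kcal/mol; more negative = stronger binding).
scores = {"ponesimod": -62.1, "vilanterol": -58.4, "silodosin": -55.9,
          "compound_a": -31.0, "compound_b": -28.7, "compound_c": -40.2}

def z_scores(values):
    """Standardize scores so compounds can be compared on one scale."""
    m, s = mean(values.values()), stdev(values.values())
    return {k: (v - m) / s for k, v in values.items()}

def top_binders(values, n=3):
    """Rank by Z-score; the most negative Z marks the tightest binders."""
    z = z_scores(values)
    return [k for k, _ in sorted(z.items(), key=lambda kv: kv[1])[:n]]
```

With these illustrative numbers, the three repurposing candidates named in the abstract would be the ones passed on to the SMD stage.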

Keywords: drug repurposing, molecular dynamics, tryptophanyl-tRNA synthetase, tuberculosis

Procedia PDF Downloads 118
250 Use of End-Of-Life Footwear Polymer EVA (Ethylene Vinyl Acetate) and PU (Polyurethane) for Bitumen Modification

Authors: Lucas Nascimento, Ana Rita, Margarida Soares, André Ribeiro, Zlatina Genisheva, Hugo Silva, Joana Carvalho

Abstract:

The footwear industry is an essential fashion industry, focusing on producing various types of footwear, such as shoes, boots, sandals, sneakers, and slippers. Global footwear consumption has doubled every 20 years since the 1950s. It is estimated that in 1950, each person consumed one new pair of shoes yearly; by 2005, over 20 billion pairs of shoes were consumed. To meet global footwear demand, production reached $24.2 billion, equivalent to about $74 per person in the United States. This means three new pairs of shoes per person worldwide. The issue of footwear waste is related to the fact that shoe production can generate a large amount of waste, much of which is difficult to recycle or reuse. This waste includes scraps of leather, fabric, rubber, plastics, toxic chemicals, and other materials. The search for alternative solutions for waste treatment and valorization is increasingly relevant in the current context, mainly when focused on utilizing waste as a source of substitute materials. From the perspective of the new circular economy paradigm, this approach is of utmost importance as it aims to preserve natural resources and minimize the environmental impact associated with sending waste to landfills. In this sense, the incorporation of waste into industrial sectors that allow for the recovery of large volumes, such as road construction, becomes an urgent and necessary solution from an environmental standpoint. This study explores the use of plastic waste from the footwear industry as a substitute for virgin polymers in bitumen modification, a solution that presents a more sustainable future. Replacing conventional polymers with plastic waste in asphalt composition reduces the amount of waste sent to landfills and offers an opportunity to extend the lifespan of road infrastructures. 
By incorporating waste into construction materials, it is possible to reduce the consumption of natural resources and the emission of pollutants, promoting a more circular and efficient economy. In the initial phase of this study, waste materials from end-of-life footwear were selected, and the plastic waste with the highest potential for application was separated. Based on a literature review, EVA (ethylene vinyl acetate) and PU (polyurethane) were identified as suitable polymers for modifying 50/70 penetration-grade bitumen. Each polymer was analysed at concentrations of 3% and 5%. The production process involved fragmenting the polymer to a size of 4 millimetres, heating the materials to 180 ºC, and mixing for 10 minutes at low speed; the blend was then mixed for 30 minutes in a high-speed mixer. The tests included penetration, softening point, viscosity, and rheological assessments. The mixtures with EVA demonstrated better results than those with PU: EVA showed greater temperature resistance, a better viscosity curve, and greater elastic recovery in the rheological tests.

Keywords: footwear waste, hot asphalt pavement, modified bitumen, polymers

Procedia PDF Downloads 4
249 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor

Authors: Hao Yan, Xiaobing Zhang

Abstract:

The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood. It is imperative to explore its unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) in predicting the parameters of the miniaturized SRM during its conceptual design phase. Initially, the design variables and objectives are constrained in a lumped parameter model (LPM) of this SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM just once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) is usually large in MOEAs, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model due to its assumptions. CFD simulations or experiments are required for comparison and verification of the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with a 3D numerical simulation. In design, the original LPM is replaced by a surrogate model. Each case uses the same MOEAs, in which the calculation time of the two models is compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction process of the miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model does provide faster and more accurate parameter prediction for an initial design scheme. 
Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model LPM. This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.

Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model

Procedia PDF Downloads 87
248 Early Outcomes and Lessons from the Implementation of a Geriatric Hip Fracture Protocol at a Level 1 Trauma Center

Authors: Peter Park, Alfonso Ayala, Douglas Saeks, Jordan Miller, Carmen Flores, Karen Nelson

Abstract:

Introduction Hip fractures account for more than 300,000 hospital admissions every year. Many present as fragility fractures in geriatric patients with multiple medical comorbidities. Standardized protocols for the multidisciplinary management of this patient population have been shown to improve patient outcomes. A hip fracture protocol was implemented at a Level I Trauma center with a focus on pre-operative medical optimization and early surgical care. This study evaluates the efficacy of that protocol, including the early transition period. Methods A retrospective review was performed of all patients ages 60 and older with isolated hip fractures who were managed surgically between 2020 and 2022. This included patients 1 year prior and 1 year following the implementation of a hip fracture protocol at a Level I Trauma center. Results 530 patients were identified: 249 patients were treated before, and 281 patients were treated after the protocol was instituted. There was no difference in mean age (p=0.35), gender (p=0.3), or Charlson Comorbidity Index (p=0.38) between the cohorts. Following the implementation of the protocol, there were observed increases in time to surgery (27.5h vs. 33.8h, p=0.01), hospital length of stay (6.3d vs. 9.7d, p<0.001), and ED LOS (5.1h vs. 6.2h, p<0.001). There were no differences in in-hospital mortality (2.01% pre vs. 3.20% post, p=0.39) and complication rates (25% pre vs 26% post, p=0.76). A trend towards improved outcomes was seen after the early transition period but failed to yield statistical significance. Conclusion Early medical management and surgical intervention are key determining factors affecting outcomes following fragility hip fractures. The implementation of a hip fracture protocol at this institution has not yet significantly affected these parameters. This could in part be due to the restrictions placed at this institution during the COVID-19 pandemic. 
Despite this, the time to OR pre- and post-implementation was quicker than figures reported elsewhere in the literature. Further longitudinal data will be collected to determine the final influence of this protocol. Significance/Clinical Relevance: Given the increasing number of elderly people and the high morbidity and mortality associated with hip fractures in this population, finding cost-effective ways to improve outcomes in the management of these injuries has the potential to have an enormous positive impact for both patients and hospital systems.

Keywords: hip fracture, geriatric, treatment algorithm, preoperative optimization

Procedia PDF Downloads 71
247 An Adaptive Conversational AI Approach for Self-Learning

Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo

Abstract:

In recent years, the focus of Natural Language Processing (NLP) development has been gradually shifting from the semantics-based approach to a deep learning one, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, due to its lack of semantic understanding, has difficulty noticing and expressing a novel business case within a pre-defined scope. In order to meet the requirements of specific robotic services, the deep learning approach is very labor-intensive and time-consuming. It is very difficult to improve the capabilities of conversational AI in a short time, and it is even more difficult to self-learn from experience to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines both semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner with self-configured programs and constructed dialog flows. For every cycle in which a chat bot (conversational AI) delivers a given set of business cases, it stops to self-measure its performance and reconsider every unknown dialog flow, improving the service by retraining with those new business cases. If the training process reaches a bottleneck or encounters difficulties, human personnel are informed for further instruction. They may retrain the chat bot with newly configured programs or new dialog flows for new services. One approach employs semantic analysis to learn the dialogues for new business cases and then establish the necessary ontology for the new service.
With the newly learned programs, it completes the understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chat bot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chat bot serves as a concierge offering polite conversation to visitors. As a proof of concept, we have demonstrated completion of 90% of reception services with limited self-learning capability.
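The self-measuring cycle described above can be sketched minimally, with keyword matching standing in for the semantic and deep learning components; all flows and phrases below are invented for illustration.

```python
class SelfLearningBot:
    """Minimal sketch of the cycle: known dialog flows are answered,
    unknown utterances are logged for the retraining step."""

    def __init__(self, flows):
        self.flows = dict(flows)   # keyword -> canned response
        self.unknown = []          # misses queued for retraining

    def reply(self, utterance):
        for keyword, response in self.flows.items():
            if keyword in utterance.lower():
                return response
        self.unknown.append(utterance)      # self-measure: record the miss
        return "Let me get back to you on that."

    def coverage(self, total_turns):
        """Fraction of turns served by known flows (the kind of measure
        behind the 90% figure in the abstract)."""
        return 1 - len(self.unknown) / total_turns

    def retrain(self, keyword, response):
        """Human-in-the-loop step: add a new flow learned from the misses."""
        self.flows[keyword] = response
        self.unknown = [u for u in self.unknown if keyword not in u.lower()]
```

In the paper's system, the retraining step would be driven by semantic analysis and ontology construction rather than a hand-supplied keyword, but the measure-log-retrain loop is the same.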

Keywords: conversational AI, chatbot, dialog management, semantic analysis

Procedia PDF Downloads 134
246 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit

Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira

Abstract:

Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, even when good homogenization work has been carried out before feeding the processing plants, operation with highly variable quality and performance is expected. These results could be improved and standardized if the blend composition parameters that most influence the processing route were determined and the types of raw materials grouped by them, finally providing a reference with operational settings for each group. Associating the physical and chemical parameters of a unit operation with a benchmark, or even an optimal reference of metallurgical recovery and product quality, is reflected in reduced production costs, optimization of the mineral resource, and greater stability in the subsequent processes of the production chain that use the mineral of interest. Conducting a comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, together with the use of machine learning algorithms for grouping the raw material (ore) and associating these groups with reference variables in the process benchmark, is a reasonable approach for the standardization and improvement of mineral processing units. Clustering methods based on decision trees and K-Means were employed, together with algorithms based on benchmarking theory, with criteria defined by the process team in order to reference the best adjustments for processing the ore piles of each cluster. A clean user interface was created to expose the outputs of the algorithm. The results were measured through the average time to adjust and stabilize the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains in the metallurgical recovery of the process were also measured.
The results were promising, with a reduction in adjustment and stabilization time when starting to process a new ore pile, as well as attainment of the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect significant savings in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and an extended life for the mineral deposit.
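The grouping-plus-benchmark idea can be sketched compactly: K-Means (Lloyd's algorithm) clusters ore piles by their feature vectors, and each cluster's benchmark is taken as the settings of its best-recovering pile. The feature values and settings labels below are illustrative; the production system also used decision trees and team-defined criteria.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign points to the nearest centroid,
    then recompute centroids, until convergence."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

def cluster_benchmarks(X, recovery, settings, k):
    """For each ore cluster, the benchmark is the settings of the pile
    with the best metallurgical recovery in that cluster."""
    labels, _ = kmeans(X, k)
    best = {}
    for j in range(k):
        idx = np.where(labels == j)[0]
        top = idx[recovery[idx].argmax()]
        best[j] = settings[top]
    return labels, best
```

When a new homogenized pile arrives, it would be assigned to its nearest cluster and the plant started from that cluster's benchmark settings, which is what shortens the adjustment and stabilization time.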

Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing

Procedia PDF Downloads 141
245 Music Piracy Revisited: Agent-Based Modelling and Simulation of Illegal Consumption Behavior

Authors: U. S. Putro, L. Mayangsari, M. Siallagan, N. P. Tjahyani

Abstract:

The National Collective Management Institute (LKMN) in Indonesia stated that legal music products amounted to about 77.552.008 units while illegal music products amounted to about 22.0688.225 units in 1996, and this number keeps getting worse every year. Consequently, Indonesia was named one of the countries with high piracy levels in 2005. This study models people's decisions toward unlawful behavior, music content piracy in particular, using agent-based modeling and simulation (ABMS). The actors in the model constructed in this study are classified as legal consumers, illegal consumers, and neutral consumers. The decision toward piracy among the actors is a manifestation of the social norm, whose attributes are social pressure, peer pressure, social approval, and perceived prevalence of piracy. These attributes fluctuate depending on the majority behavior in the surrounding social network. Two main interventions are undertaken in the model, campaign and peer influence, which lead to four scenarios in the simulation: a positively-framed descriptive norm message, a negatively-framed descriptive norm message, a positively-framed injunctive norm with benefits message, and a negatively-framed injunctive norm with costs message. Using NetLogo, the model is simulated in 30 runs with 10.000 iterations per run. The initial number of agents was set at 100, with a 95:5 proportion of illegal to legal consumption. This proportion is based on data stating that 95% of music industry sales are pirated. The finding of this study is that the negatively-framed descriptive norm message has a reverse effect, worsening music piracy. The study discovers that selecting a context-based campaign is the key to reducing the level of intention toward music piracy as unlawful behavior by increasing compliance awareness.
The context of Indonesia reveals that the majority of people have actively engaged in music piracy as unlawful behavior, so people come to think that this illegal act is common behavior. Therefore, providing information about how widespread and big this problem is could instead encourage illegal consumption behavior. The positively-framed descriptive norm message scenario works best to reduce music piracy numbers, as it focuses on supporting positive behavior and conveying the right perception of this phenomenon. Music piracy is not merely an economic but rather a social phenomenon, due to the underlying motivation of the actors, which has shifted toward community sharing. The indication of a misconception of value co-creation in the context of music piracy in Indonesia is also discussed. This study contributes theoretically by showing that understanding how social norms configure the behavior of the decision-making process is essential to break down the phenomenon of unlawful behavior in the music industry. In practice, this study proposes that reward-based and context-based strategies are the most relevant for stakeholders in the music industry. Furthermore, the findings may generalize well beyond the music piracy context. As an emerging body of work that systematically constructs the backstage of how law and social factors affect the decision-making process, it will be interesting to see how the model performs in other decision-behavior situations.
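The norm-driven adoption rule described above can be sketched in a few lines of Python. This is a heavily reduced illustration, not the authors' NetLogo model: the update rule, peer-group size, and the way a campaign shifts the perceived prevalence of piracy are all assumptions, and the neutral consumer class is omitted for brevity.

```python
import random

LEGAL, ILLEGAL = "legal", "illegal"

def step(agents, peers=5, campaign_effect=0.0, rng=random):
    """One synchronous update: each agent samples a peer group and adopts
    illegal behavior with probability equal to the perceived prevalence of
    piracy, shifted down by a (positively-framed) campaign."""
    new_states = []
    for _ in agents:
        sample = rng.sample(agents, min(peers, len(agents)))
        illegal_share = sample.count(ILLEGAL) / len(sample)
        p_illegal = max(0.0, min(1.0, illegal_share - campaign_effect))
        new_states.append(ILLEGAL if rng.random() < p_illegal else LEGAL)
    return new_states

def simulate(n=100, init_illegal=0.95, iters=1000, campaign_effect=0.0, seed=42):
    """Run the toy model from a 95:5 illegal:legal start, as in the abstract,
    and return the final share of illegal consumers."""
    rng = random.Random(seed)
    k = int(n * init_illegal)
    agents = [ILLEGAL] * k + [LEGAL] * (n - k)
    for _ in range(iters):
        agents = step(agents, campaign_effect=campaign_effect, rng=rng)
    return agents.count(ILLEGAL) / n

# In this toy model, a positive campaign (campaign_effect > 0) drives the
# illegal share down, while with no intervention the initial 95% majority
# tends to persist.
```

A context-sensitive campaign would correspond to making `campaign_effect` depend on the local `illegal_share`, which is the kind of intervention design the abstract argues for.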

Keywords: music piracy, social norm, behavioral decision-making, agent-based model, value co-creation

Procedia PDF Downloads 185
244 Barriers to Business Model Innovation in the Agri-Food Industry

Authors: Pia Ulvenblad, Henrik Barth, Jennie Cederholm Björklund, Maya Hoveskog, Per-Ola Ulvenblad

Abstract:

The importance of business model innovation (BMI) is widely recognized. This also holds for firms in the agri-food industry, which is closely connected to global challenges. Worldwide food production will have to increase by 70% by 2050, and the United Nations' Sustainable Development Goals prioritize research and innovation on food security and sustainable agriculture. Firms in the agri-food industry have opportunities to increase their competitive advantage through BMI. However, the process of BMI is complex, and the implementation of new business models is associated with a high degree of risk and failure. Thus, managers from all industries, as well as scholars, need to better understand how to address this complexity. Therefore, the research presented in this paper (i) explores different categories of barriers in the research literature on business models in the agri-food industry, and (ii) illustrates these categories of barriers with empirical cases. This study addresses the rather limited understanding of barriers to BMI in the agri-food industry through a systematic literature review (SLR) of 570 peer-reviewed journal articles that contained a combination of 'BM' or 'BMI' with agriculture-related and food-related terms (e.g., 'agri-food sector') published in the period 1990-2014. The study classifies the barriers into several categories and illustrates the identified barriers with ten empirical cases. Findings from the literature review show that barriers are mainly identified as outcomes. It can be assumed that a perceived barrier to growth is often initially exaggerated or underestimated before being challenged by appropriate measures or courses of action. What the public mind considers a barrier could in reality be very different from an actual barrier that needs to be challenged. One way of addressing barriers to growth is to define barriers according to their origin (internal/external) and nature (tangible/intangible). 
The framework encompasses barriers related to the firm (internal, addressing in-house conditions) or to the industrial or national levels (external, addressing environmental conditions). Tangible barriers can include asset shortages in the area of equipment or facilities, while human resource deficiencies or negative attitudes toward growth are examples of intangible barriers. Our findings are consistent with previous research on barriers to BMI, which has identified human factor barriers (individuals' attitudes, histories, etc.); contextual barriers related to company and industry settings; and more abstract barriers (government regulations, value chain position, and weather). However, human factor barriers (and opportunities) related to family-owned businesses that hold idealistic values and attitudes and own the real estate where the business is situated are more frequent in the agri-food industry than in other industries. This paper contributes by generating a classification of the barriers to BMI and illustrating them with empirical cases. We argue that internal barriers such as human factor barriers, values, and attitudes are crucial to overcome in order to develop BMI. However, they can be as hard to overcome as, for example, institutional barriers such as government regulations. Implications for research and practice are to focus on cognitive barriers and to develop the BMI capability of the owners and managers of agri-food firms.
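The origin-by-nature framework above forms a simple 2x2 classification, which can be sketched as a lookup. The quadrant labels follow the abstract; the example barriers placed in each quadrant are illustrative assumptions, not the study's own mapping.

```python
# Sketch of the origin (internal/external) x nature (tangible/intangible)
# barrier framework. Example entries per quadrant are assumptions.
FRAMEWORK = {
    ("internal", "tangible"): "e.g., shortages of equipment or facilities",
    ("internal", "intangible"): "e.g., negative attitudes toward growth",
    ("external", "tangible"): "e.g., value chain position, weather",
    ("external", "intangible"): "e.g., government regulations",
}

def classify_barrier(origin: str, nature: str) -> str:
    """Return the quadrant description for a barrier."""
    try:
        return FRAMEWORK[(origin, nature)]
    except KeyError:
        raise ValueError(
            "origin must be internal/external, nature tangible/intangible"
        )
```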

Keywords: agri-food, barriers, business model, innovation

Procedia PDF Downloads 228
243 A Retrospective Study: Correlation between Enterococcus Infections and Bone Carcinoma Incidence

Authors: Sonia A. Stoica, Lexi Frankel, Amalia Ardeljan, Selena Rashid, Ali Yasback, Omar Rashid

Abstract:

Introduction: Enterococcus is a vast genus of lactic acid bacteria comprising gram-positive cocci species. They are common commensal organisms in the intestines of humans: E. faecalis (90–95%) and E. faecium (5–10%). Rare groups of infections can occur with other species, including E. casseliflavus, E. gallinarum, and E. raffinosus. The most common infections caused by Enterococcus include urinary tract infections, biliary tract infections, subacute endocarditis, diverticulitis, meningitis, septicemia, and spontaneous bacterial peritonitis. The treatment for sensitive strains of these bacteria includes ampicillin, penicillin, cephalosporins, or vancomycin, while the treatment for resistant strains includes daptomycin, linezolid, tigecycline, or streptogramins. Enterococcus faecalis CECT7121 is an encouraging candidate for consideration as a probiotic strain. E. faecalis CECT7121 enhances and skews the cytokine profile toward the Th1 phenotype in situations such as vaccination, anti-tumoral immunity, and allergic reactions. It also enhances the secretion of high levels of IL-12, IL-6, TNF-alpha, and IL-10. Cytokines have previously been associated with the development of cancer. The intention of this study was therefore to evaluate the correlation between Enterococcus infections and the incidence of bone carcinoma. Methods: A retrospective cohort study (2010-2019) was conducted through a Health Insurance Portability and Accountability Act (HIPAA) compliant national database using International Classification of Disease (ICD) 9th and 10th edition codes for bone carcinoma diagnosis in a previously Enterococcus-infected population. Patients were matched for age range and Charlson Comorbidity Index (CCI). Access to the database was granted by Holy Cross Health for academic research. The chi-squared test was used to assess statistical significance. 
Results: A total of 17,056 patients was obtained in the Enterococcus-infected group as well as in the control population (matched by age range and CCI score). Subsequent bone carcinoma development was seen at a rate of 1.07% (184) in the Enterococcus-infected group and 3.42% (584) in the control group, respectively. The difference was statistically significant, with p = 2.2x10⁻¹⁶ and Odds Ratio = 0.355 (95% CI 0.311-0.404). Treatment for Enterococcus infection was analyzed and controlled for in both the infected and noninfected populations. 78 out of 6,624 (1.17%) patients with a prior Enterococcus infection who were treated with antibiotics were compared to 202 out of 6,624 (3.04%) patients with no history of Enterococcus infection (control) who received antibiotic treatment. Both populations subsequently developed bone carcinoma. Results remained statistically significant (p < 2.2x10⁻¹⁶), Odds Ratio = 0.456 (95% CI 0.396-0.525). Conclusion: This study shows a statistically significant correlation between Enterococcus infection and a decreased incidence of bone carcinoma. The immunologic response of the organism to Enterococcus infection may exert a protective mechanism against developing bone carcinoma. Further exploration is needed to identify the potential mechanism of Enterococcus in reducing bone carcinoma incidence.
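The odds ratios and 95% confidence intervals reported above are standard quantities derived from a 2x2 contingency table. A minimal sketch of the textbook computation (Wald interval) follows; the counts in the example are invented for illustration and the code does not reproduce the study's matched analysis.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
         a = exposed cases,    b = exposed non-cases,
         c = unexposed cases,  d = unexposed non-cases.
    Textbook formula only; matched-cohort analyses use adjusted methods."""
    or_ = (a * d) / (b * c)
    # standard error of log(OR)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only (not the study's data):
or_, lo, hi = odds_ratio_ci(30, 970, 60, 940)
```

An odds ratio below 1 with a confidence interval that excludes 1, as in the study's results, indicates a statistically significant decrease in odds for the exposed group.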

Keywords: anti-tumoral immunity, bone carcinoma, enterococcus, immunologic response

Procedia PDF Downloads 175
242 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller

Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian

Abstract:

The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and the deviation of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system which measures the deviations of the chamber temperature from set target values, feeds these deviations (which act as disturbances in the system) back as an input signal, and adjusts operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in a Programmable Logic Controller (PLC). The developed control algorithm, with chamber temperature as the feedback signal, was integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity in the range of 222-273 ft/min and fuel feeding rates of 60-90 rpm on the chamber temperature. 
The developed temperature-based feedback control system was shown to be adequate in controlling the airflow and the fuel feeding rate for the overall biomass combustion process, as it helps to minimize the steady-state error.
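The temperature feedback loop described above can be sketched in very reduced form: the blower command accumulates (integrates) the temperature error, and a crude first-order plant model maps the blower output back to chamber temperature. All gains, time constants, and setpoints below are invented for illustration; the actual system runs as ladder logic on a PLC, not Python.

```python
def run_loop(setpoint=850.0, kp=0.05, steps=200):
    """Toy temperature feedback loop. The error between the chamber
    temperature and its setpoint adjusts the normalized air blower
    command (integral-like action), and a first-order toy plant maps
    the blower command to temperature. Values are illustrative, not
    measured SFBC parameters."""
    temp = 600.0      # initial chamber temperature, degC (assumed)
    blower = 0.5      # normalized blower command, clamped to 0..1
    for _ in range(steps):
        error = setpoint - temp                           # feedback signal
        blower = min(1.0, max(0.0, blower + kp * error * 0.01))
        target = 500.0 + 400.0 * blower                   # toy plant gain
        temp += 0.2 * (target - temp)                     # first-order lag
    return temp

# Because the blower command integrates the error, the loop settles at the
# setpoint with negligible steady-state error, mirroring the abstract's
# observation about minimizing steady-state error.
```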

Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature

Procedia PDF Downloads 124
241 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. The question therefore becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources for solving this problem, and models have been developed to detect buildings in optical satellite imagery. By and large, however, most models focus on affluent regions, where buildings are generally larger and constructed further apart. This work focuses on the more difficult problem of detection in densely populated regions. The primary challenge in detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials are difficult to separate because of their similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. 
Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few high-quality areas are necessary, so any weak or underpopulated regions in the data can be skipped in favor of areas with higher-quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source for quality-checking the detected buildings, which greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
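The core claim, that a model deliberately specialized to its target region can beat a general model inside that region, can be illustrated with a toy example in pure Python. This does not reproduce the study's Mask R-CNN pipeline; all numbers are invented, and the "models" are simply region means.

```python
# Toy illustration of specialization vs. generalization: a predictor
# fitted only to the target region beats one fitted to all regions when
# evaluated inside that target region.

def mse(pred, values):
    """Mean squared error of a constant prediction against samples."""
    return sum((v - pred) ** 2 for v in values) / len(values)

# Invented "building size" samples from two very different regions.
affluent = [120.0, 150.0, 140.0, 160.0]
dense = [20.0, 25.0, 18.0, 22.0]

# General model: fitted across all regions (global mean).
general_model = sum(affluent + dense) / len(affluent + dense)
# Specialized ("overfitted") model: fitted to the target region only.
specialized_model = sum(dense) / len(dense)

err_general = mse(general_model, dense)
err_specialized = mse(specialized_model, dense)
# err_specialized < err_general inside the dense target region; the
# trade-off is that the specialized model transfers poorly elsewhere.
```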

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 166
240 Sustainable Marine Tourism: Opinion and Segmentation of Italian Generation Z

Authors: M. Bredice, M. B. Forleo, L. Quici

Abstract:

Coastal tourism currently faces huge challenges in balancing environmental problems and tourist activities. Recent literature shows a growing interest in the issue of sustainable tourism from a so-called civilized tourist's perspective, investigating opinions, perceptions, and behaviors. This study investigates the opinions of young people on what makes them responsible tourists and on the ability of coastal marine areas to support tourism in future scenarios. A sample of 778 Italians attending the last year of high school was interviewed. Descriptive statistics, statistical tests, and cluster analyses are applied to highlight the distribution of opinions among youth, detect significant differences based on demographic characteristics, and segment the different profiles based on students' opinions and behaviors. Preliminary results show that students are largely convinced (62%) that by 2050 the quality of coastal environments could limit seaside tourism, while 10% of them believe that the problem can be solved simply by changing the tourist destination. Besides the cost of the holiday, the most relevant aspect respondents consider when choosing a marine destination is the presence of tourist attractions, followed by the quality of the marine-coastal environment, the specificity of the local gastronomy and cultural traditions, and, finally, the activities offered to guests, such as sports and events. The reduction of waste and lower air emissions are considered the most important environmental areas in which marine-coastal tourism activities can contribute to preserving the quality of seas and coasts. 
The areas in which respondents, as tourists, believe they can make a personal contribution were (responses "very much" and "somewhat"): not throwing litter into the sea or onto the beach (84%), not buying single-use plastic products (66%), not using soap or shampoo when showering on beaches (53%), not having bonfires (47%), not damaging dunes (46%), and not removing natural materials (e.g., sand, shells) from the beach (46%). About 6% of the sample stated that they were not interested in contributing to the aforementioned activities, while another 7% replied that they could not contribute at all. Finally, 80% of the sample has never participated in voluntary environmental initiatives or citizen science projects; moreover, about 64% of the students have never participated in events organized by environmental associations in marine or coastal areas. Regarding the test analysis, based on the Kruskal-Wallis and Mann-Whitney tests, the gender, region, and field of study of the students reveal significance in terms of variables expressing knowledge of and interest in sustainability topics and sustainable tourism behaviors. The field of education is significant for a great number of variables, among them those related to the several sustainable behaviors to which respondents declare they can contribute as tourists. The ongoing cluster analysis will reveal the different profiles in the sample and the relevant variables. Based on the preliminary results, implications are envisaged in the fields of education, policy, and business strategy for sustainable scenarios. From these perspectives, the study has the potential to contribute to the conference debate on sustainable marine and coastal development and management.
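For reference, the Kruskal-Wallis H statistic used in the test analysis is computed from the ranks of the pooled observations. A minimal pure-Python sketch follows; it assumes no tied observations (the full test divides H by a tie-correction factor when ranks are tied) and is not the authors' analysis code.

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic across k independent groups:
        H = 12 / (N (N + 1)) * sum(R_i^2 / n_i) - 3 (N + 1)
    where R_i is the rank sum of group i. Simplified sketch assuming
    no tied observations."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # 1-based ranks
    n = len(pooled)
    h = 0.0
    for g in groups:
        r_sum = sum(rank[v] for v in g)
        h += r_sum ** 2 / len(g)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)

# Example: three well-separated groups yield a large H, which is then
# compared against a chi-squared distribution with k - 1 degrees of freedom.
# kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
```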

Keywords: cluster analysis, education, knowledge, young people

Procedia PDF Downloads 74
239 Assessment of Potential Chemical Exposure to Betamethasone Valerate and Clobetasol Propionate in Pharmaceutical Manufacturing Laboratories

Authors: Nadeen Felemban, Hamsa Banjer, Rabaah Jaafari

Abstract:

One of the most common hazards in the pharmaceutical industry is the chemical hazard, where chronic exposure to hazardous substances can cause harm or lead to occupational diseases and illnesses. Therefore, a chemical agent management system is required, including hazard identification, risk assessment, controls for specific hazards, and inspections, to keep the workplace healthy and safe. Routine management monitoring is also required to verify the effectiveness of the control measures. Betamethasone Valerate and Clobetasol Propionate are APIs (Active Pharmaceutical Ingredients) with a highly hazardous classification, Occupational Hazard Category (OHC) 4, which requires full containment (ECA-D) during handling to avoid chemical exposure. According to the Safety Data Sheet, these chemicals are reproductive toxicants (reprotoxicant H360D), which may affect female workers' health, cause fatal damage to an unborn child, or impair fertility. In this study, a qualitative chemical risk assessment (qCRA) was conducted to assess chemical exposure during the handling of Betamethasone Valerate and Clobetasol Propionate in pharmaceutical laboratories. The outcomes of the qCRA identified a risk of potential chemical exposure (risk rating 8, Amber risk). Therefore, immediate actions were taken to ensure that interim controls (according to the hierarchy of controls) were in place and in use to minimize the risk of chemical exposure. No open handling should be done outside the Steroid Glove Box Isolator (SGB) without the required Personal Protective Equipment (PPE). The PPE includes a coverall, nitrile gloves, safety shoes, and a powered air-purifying respirator (PAPR). Furthermore, a quantitative assessment (personal air sampling) was conducted to verify the effectiveness of the engineering controls (SGB Isolator) and to confirm whether there was chemical exposure, as indicated earlier by the qCRA. 
Three personal air samples were collected using an air sampling pump and filters (IOM2 filters, 25 mm glass fiber media). The collected samples were analyzed by HPLC in the BV lab, and the measured concentrations were reported in µg/m³ with reference to the 8-hr Occupational Exposure Limits (8-hr TWA) for each analyte. The analytical results are needed as 8-hr TWA (8-hr time-weighted average) values to be analyzed using Bayesian statistics (IHDataAnalyst). The results of the Bayesian likelihood graph indicate category 0, meaning exposures are "de minimis," trivial, or non-existent, and employees have little to no exposure. These results also indicate that the three samplings are representative, with very low variation (SD = 0.0014). In conclusion, the engineering controls were effective in protecting the operators from such exposure. However, routine chemical monitoring is required every 3 years unless there is a change in the process or type of chemicals. Also, frequent management monitoring (daily, weekly, and monthly) is required to ensure the control measures are in place and in use. Furthermore, a Similar Exposure Group (SEG) was identified in this activity and included in the annual health surveillance for health monitoring.
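The 8-hr TWA values compared against the OELs above follow the standard time-weighted average formula, TWA = sum(C_i * t_i) / 8. A minimal sketch follows; the sample concentrations and durations in the example are invented for illustration, and the zero-exposure assumption for unsampled time is a common but not universal convention.

```python
def twa_8hr(samples):
    """8-hr time-weighted average exposure.
    samples: list of (concentration in ug/m3, duration in hours) pairs.
    Unsampled time within the 8-hr shift is treated as zero exposure,
    a common (though not universal) convention."""
    return sum(c * t for c, t in samples) / 8.0

# Illustrative: two sampling periods within one shift.
# twa_8hr([(0.8, 2.0), (0.2, 6.0)]) gives (1.6 + 1.2) / 8 = 0.35 ug/m3,
# which would then be compared against the analyte's 8-hr OEL.
```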

Keywords: occupational health and safety, risk assessment, chemical exposure, hierarchy of control, reproductive

Procedia PDF Downloads 169
238 AIR SAFE: an Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms

Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita

Abstract:

Nowadays, people spend most of their time in closed environments: in offices or at home. Therefore, secure and highly livable environmental conditions are needed to reduce the probability of airborne viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate the airflow is, therefore, essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people's well-being, at home or in offices, and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) improved monitoring and prediction accuracy; (ii) enhanced decision-making and mitigation strategies; (iii) real-time air quality information; (iv) increased efficiency in data analysis and processing; (v) advanced early warning systems for air pollution events; (vi) an automated and cost-effective monitoring network; and (vii) a better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments by leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data, which are analyzed in order to take any corrective measures needed to ensure the occupants' wellness. The data are analyzed by AI algorithms able to predict the future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening/closing the window or the air conditioner, to guarantee a high level of thermal comfort and air quality in the environment. 
In this contribution, we present the results from the AI algorithm we have implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.
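The predict-then-act loop described in the abstract can be sketched as follows. The moving-average forecaster and the 1000 ppm CO₂ threshold are illustrative placeholders only; AIR SAFE's actual predictive models and control thresholds are not specified in the abstract.

```python
# Sketch of a predict-then-act loop in the spirit of AIR SAFE: forecast
# the next CO2 reading, then decide whether to ventilate. The forecaster
# and threshold are assumptions, not the system's actual AI model.

def forecast_next(readings, window=3):
    """Predict the next CO2 level as the mean of the last `window` readings."""
    recent = readings[-window:]
    return sum(recent) / len(recent)

def decide_action(readings, threshold=1000.0):
    """Return a corrective action based on the predicted CO2 level (ppm)."""
    predicted = forecast_next(readings)
    return "open_window" if predicted > threshold else "keep_closed"
```

In the real system, the same decision point would weigh thermal comfort and energy use alongside air quality, and the forecaster would be one of the AI models cited in [2].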

Keywords: air quality, internet of things, artificial intelligence, smart home

Procedia PDF Downloads 89