Search results for: search algorithms
796 The Determinants of Co-Production for Value Co-Creation: Quadratic Effects
Authors: Li-Wei Wu, Chung-Yu Wang
Abstract:
Recently, interest has been generated in the search for a new reference framework for value creation that is centered on the co-creation process. Co-creation implies cooperative value creation between service firms and customers and requires the building of experiences as well as the resolution of problems through the combined effort of the parties in the relationship. For customers, values are always co-created through their participation in services. Customers can ultimately determine the value of the service in use. This new approach emphasizes that a customer’s participation in the service process is considered indispensable to value co-creation. An important feature of service in the context of exchange is co-production, which implies that a certain amount of participation is needed from customers to co-produce a service and hence co-create value. Co-production no doubt helps customers better understand and take charge of their own roles in the service process. Thus, this study proposes to encourage co-production, thereby facilitating value co-creation that is reflected in both customers and service firms. Four determinants of co-production are identified in this study, namely, commitment, trust, asset specificity, and decision-making uncertainty. Commitment is an essential dimension that directly results in successful cooperative behaviors. Trust helps establish a relational environment that is fundamental to cross-border cooperation. Asset specificity motivates co-production because this determinant may enhance return on asset investment. Decision-making uncertainty prompts customers to collaborate with service firms in making decisions. In other words, customers adjust their roles and are increasingly engaged in co-production when commitment, trust, asset specificity, and decision-making uncertainty are enhanced.
Although studies have examined the preceding effects, to the best of our knowledge, none has empirically examined the simultaneous effects of all the curvilinear relationships in a single study. When these determinants are excessive, however, customers will not engage in the co-production process. In brief, we suggest that the relationships of commitment, trust, asset specificity, and decision-making uncertainty with co-production are curvilinear, or inverse U-shaped. These new forms of curvilinear relationships have not been identified in the existing literature on co-production; therefore, they complement extant linear approaches. Most importantly, we aim to consider both the bright and the dark sides of the determinants of co-production.
Keywords: co-production, commitment, trust, asset specificity, decision-making uncertainty
Procedia PDF Downloads 188
795 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble
Authors: Jaehong Yu, Seoung Bum Kim
Abstract:
Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods; we focus on unsupervised feature ranking methods, which evaluate features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we propose an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into ensemble importance scores. Moreover, through the use of the multiple-k ensemble idea, FRRM does not require the true number of clusters to be determined in advance. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods.
The experimental results demonstrated that the proposed FRRM outperformed the competitors.
Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking
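The two ideas behind FRRM — random feature subspaces and multiple values of k — can be sketched in a few lines. The following is a hypothetical illustration, not the authors' exact scoring rule: each ensemble member clusters a random subspace with a randomly drawn k (via a minimal k-means), and every feature in that subspace is scored by its between-/within-cluster variance ratio against the resulting labels; scores are averaged over the ensemble.

```python
import numpy as np

def _kmeans(X, k, rng, n_iter=25):
    # minimal Lloyd's k-means returning cluster labels
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def frrm_scores(X, n_ensembles=40, k_range=(2, 4), seed=0):
    """Sketch of random-subspace, multiple-k ensemble feature ranking.

    The subspace size and the F-ratio scoring rule are assumptions made
    for illustration; they are not taken from the FRRM paper.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    m = max(2, p // 2)                       # random subspace size (assumption)
    scores, counts = np.zeros(p), np.zeros(p)
    for _ in range(n_ensembles):
        feats = rng.choice(p, size=m, replace=False)
        k = int(rng.integers(k_range[0], k_range[1] + 1))
        labels = _kmeans(X[:, feats], k, rng)
        for j in feats:
            x = X[:, j]
            groups = [x[labels == c] for c in range(k) if np.any(labels == c)]
            between = sum(len(g) * (g.mean() - x.mean()) ** 2 for g in groups)
            within = sum(((g - g.mean()) ** 2).sum() for g in groups)
            scores[j] += between / (within + 1e-12)
            counts[j] += 1
    return scores / np.maximum(counts, 1)    # ensemble importance score
```

Because every member draws its own subspace and its own k, no single "true" number of clusters is ever needed, which is the point of the multiple-k idea.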
Procedia PDF Downloads 339
794 From Creativity to Innovation: Tracking Rejected Ideas
Authors: Lisete Barlach, Guilherme Ary Plonski
Abstract:
Innovative ideas are not always synonymous with business opportunities. Any idea can be creative yet not be recognized as a potential project in which money, time, and other resources will be invested. Even in firms that promote and enhance innovation, there are two 'check-points': the first corresponds to the acknowledgment of the idea as creative and the second to its consideration as a business opportunity. The recognition of new business opportunities or new ideas involves cognitive and psychological frameworks which provide individuals with a basis for noticing connections between seemingly independent events or trends, as if they were 'connecting the dots'. It also involves prototypes, representing the most typical member of a certain category, functioning as 'templates' for this recognition. There is a general assumption that these kinds of evaluation processes develop through experience, explaining why expertise plays a central role in this process: the more experienced a professional, the easier it is for him or her to identify new opportunities in business. But, paradoxically, an increase in expertise can lead to inflexibility of thought due to the automation of procedures. Besides this, other cognitive biases can also be present, because new ideas or business opportunities generally depend on heuristics rather than on established algorithms. The paper presents a literature review of the Einstellung effect by tracking famous cases of rejected ideas extracted from historical records. It also presents the results of empirical research, with data on rejected ideas gathered from two different environments: projects rejected during the first semester of 2017 at a large incubator center in Sao Paulo, and ideas proposed by employees that were rejected by a well-known business company at its Brazilian headquarters.
There is an implicit assumption that the Einstellung effect tends to be increasingly present in contemporary settings, due to time pressure on decision-making and the idea generation process. The analysis discusses desirability, viability, and feasibility as elements that affect decision-making.
Keywords: cognitive biases, Einstellung effect, recognition of business opportunities, rejected ideas
Procedia PDF Downloads 204
793 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength
Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos
Abstract:
Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers. By means of such equipment, one is able to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance. In this way, statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems consider that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skewed data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, 9 statistical distributions (symmetric and skew) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable. Furthermore, it is possible to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.
Keywords: statistical slope stability analysis, skew distributions, probability of failure, functions of random variables
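The failure-probability step can be illustrated by Monte Carlo simulation once a distribution has been fitted to the friction angle. The sketch below is a simplification of the approach described: it assumes a cohesionless infinite slope, where FS = tan(φ)/tan(θ), and a normal distribution with hypothetical parameters, whereas the study itself derives the PDF analytically and uses a Dagum fit.

```python
import numpy as np

def failure_probability(phi_mean_deg, phi_sd_deg, slope_deg,
                        n_samples=100_000, seed=0):
    """Monte Carlo estimate of P(FS < 1) with a random friction angle.

    Assumes a cohesionless infinite slope (FS = tan(phi) / tan(slope)) and
    a normally distributed friction angle; both are illustrative choices,
    not the paper's Dagum-based analytical derivation.
    """
    rng = np.random.default_rng(seed)
    phi = np.radians(rng.normal(phi_mean_deg, phi_sd_deg, n_samples))
    fs = np.tan(phi) / np.tan(np.radians(slope_deg))  # factor of safety
    return float(np.mean(fs < 1.0))                   # failure probability
```

Swapping the normal sampler for a skewed one (e.g., a fitted Dagum distribution) is exactly the comparison the abstract describes: with skewed data, the two fits can give materially different failure probabilities.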
Procedia PDF Downloads 338
792 Conservation Planning of Paris Polyphylla Smith, an Important Medicinal Herb of the Indian Himalayan Region Using Predictive Distribution Modelling
Authors: Mohd Tariq, Shyamal K. Nandi, Indra D. Bhatt
Abstract:
Paris polyphylla Smith (family Liliaceae; English name: love apple; local name: Satuwa) is an important folk medicinal herb of the Indian subcontinent, being a source of a number of bioactive compounds for drug formulation. The rhizomes are widely used as antihelmintic, antispasmodic, digestive stomachic, expectorant and vermifuge, as well as for antimicrobial, anti-inflammatory, heart and vascular malady, anti-fertility and sedative applications. In view of this, the species is being constantly removed from nature for trade and various pharmaceutical purposes; as a result, the availability of the species in its natural habitat is decreasing. In this context, it is pertinent to conserve this species and reintroduce it in its natural habitat. Predictive distribution modelling of this species was performed in the Western Himalayan Region. One such recent method is ecological niche modelling, also popularly known as species distribution modelling, which uses computer algorithms to generate predictive maps of species distributions in a geographic space by correlating point distributional data with a set of environmental raster data. In the case of P. polyphylla, such modelling helps to identify potential distribution zones for setting up artificial introductions, selecting conservation sites, and conserving and managing its native habitat. Among the different districts of Uttarakhand (28°05ˈ-31°25ˈ N and 77°45ˈ-81°45ˈ E), 'Maximum Entropy' (Maxent) modelling has predicted wider potential distribution of P. polyphylla Smith in Uttarkashi, Rudraprayag, Chamoli, Pauri Garhwal and some parts of Bageshwar. The distribution of P. polyphylla is mainly governed by Precipitation of Driest Quarter and Mean Diurnal Range, with contributions of 27.08% and 18.99% respectively, which indicates that humidity (27%) and average temperature (19°C) might be suitable for better growth of Paris polyphylla.
Keywords: biodiversity conservation, Indian Himalayan region, Paris polyphylla, predictive distribution modelling
Procedia PDF Downloads 330
791 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction
Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé
Abstract:
One of the main applications of machine learning is the prediction of time series. However, a more accurate prediction requires a better-optimized machine learning model. Several optimization techniques have been developed, but without considering the disposition (ordering) of the system's input variables. Thus, this work presents a new machine learning architecture optimization technique based on the optimal disposition of the input variables. The validations are done on the prediction of wind time series, using data collected in Cameroon. The number of possible dispositions with four input variables is twenty-four. Each of the dispositions is used to perform the prediction, with the main criteria being the training and prediction performances. The results obtained from a static architecture and a dynamic architecture of neural networks have shown that these performances are a function of the input variables' disposition, in a way that differs between the two architectures. This analysis revealed that it is necessary to take the input variables' disposition into account when developing a more optimal neural network model. Thus, a new neural network training algorithm is proposed by introducing the search for the optimal input variables disposition into the traditional back-propagation algorithm. The results of applying this new optimization approach to the two single neural network architectures are compared step by step with the previously obtained results. Moreover, the proposed approach is validated in a collaborative optimization method with a single-objective optimization technique, i.e., genetic algorithm back-propagation neural networks. From these comparisons, it is concluded that each proposed model outperforms its traditional counterpart in terms of training and prediction performance on time series. Thus, the proposed optimization approach can be useful in improving the accuracy of time series forecasts.
Keywords: input variable disposition, machine learning, optimization, performance, time series prediction
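The exhaustive-search step over the 4! = 24 orderings can be sketched as follows. This is a generic skeleton, not the authors' neural models: `evaluate` stands in for whatever training-and-prediction error a given architecture assigns to one column ordering, and the placeholder used in practice would be a network whose performance actually depends on input order.

```python
from itertools import permutations

import numpy as np

def best_disposition(X, y, evaluate):
    """Exhaustively score every disposition (ordering) of the input columns.

    `evaluate(X_permuted, y) -> error` is a user-supplied, hypothetical
    scoring function; for a plain linear least-squares fit the ordering
    would be irrelevant, so in the paper's setting it is a neural network
    whose structure makes the ordering matter.
    """
    n_vars = X.shape[1]
    results = {}
    for perm in permutations(range(n_vars)):
        results[perm] = evaluate(X[:, perm], y)   # reorder columns, score
    best = min(results, key=results.get)          # lowest error wins
    return best, results
```

With four variables the dictionary holds exactly 24 entries, matching the count given in the abstract; the same loop can be folded into a training procedure, which is essentially what the proposed modified back-propagation algorithm does.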
Procedia PDF Downloads 109
790 Decoding the Construction of Identity and Struggle for Self-Assertion in Toni Morrison and Selected Indian Authors
Authors: Madhuri Goswami
Abstract:
The matrix of power establishes the hegemonic dominance and supremacy of one group through exercising repression and relegation upon the other. However, the injustice done to any race, ethnicity, or caste has instigated protest and resistance through various modes: social campaigns, political movements, literary expression, and so on. Consequently, the search for identity, the means of claiming it, and the striving for recognition have evolved as persistent phenomena all through the world. In the discourse of protest and minority literature, these two bodies of writing, African American and Indian Dalit, surprisingly share wrath and anger, hope and aspiration, and a quest for identity and struggle for self-assertion. African Americans and Indian Dalits are two geographically and culturally distant communities that stand together on a single platform. This paper seeks to comprehend the form and investigate the formation of identity in general, and in the literary work of Toni Morrison and Indian Dalit writing in particular, i.e., Black identity and Dalit identity. The study considers two types of identity, namely, individual or self identity and social or collective identity, in the literary province of these marginalized literatures. Morrison’s work shows that self-identity is not merely a reflection of an inner essence; it is constructed through social circumstances and relations. Likewise, Dalit writings too have a fair record of the discovery of selfhood and the formation of identity, which connects to the realization of self-assertion and the worthiness of their culture among Dalit writers. Bama, Pawar, Limbale, Pawde, and Kamble investigate their true selves concealed amid societal alienation. The study has found that the struggle for recognition is, in fact, the striving to become the definer instead of just being defined; and this striving eventually leads to introspection among them.
To conclude, Morrison as well as the Indian marginalized authors, despite being set quite distant from each other, communicate the relation between individual and community in the context of self-consciousness, self-identification, and (self-)introspection. This research opens a scope for further work to find similar phenomena and trace an analogy in other world literatures.
Keywords: identity, introspection, self-access, struggle for recognition
Procedia PDF Downloads 154
789 Modified Weibull Approach for Bridge Deterioration Modelling
Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight
Abstract:
State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions of age for each condition state cannot be directly derived by using a Markov model for a given bridge element group, which however is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM model output, namely the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings algorithm (MHA) based Markov chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict conditions accurately at the network level but also to capture the model uncertainties within a given confidence interval.
Keywords: bridge deterioration modelling, modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models
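The Bayesian estimation machinery referenced above can be illustrated with a toy random-walk Metropolis-Hastings sampler for Weibull parameters. This is a generic sketch on synthetic lifetime data, not the authors' bridge model: the synthetic data, the flat prior on the log-parameters, and the proposal step size are all assumptions made for illustration.

```python
import numpy as np

def weibull_loglik(x, shape, scale):
    # log-likelihood of i.i.d. Weibull(shape, scale) observations
    return (len(x) * (np.log(shape) - shape * np.log(scale))
            + (shape - 1) * np.log(x).sum() - ((x / scale) ** shape).sum())

def metropolis_hastings(x, n_iter=6000, burn=1000, step=0.08, seed=0):
    """Random-walk Metropolis-Hastings over (log-shape, log-scale).

    A flat prior on the log-parameters is assumed, so the acceptance ratio
    reduces to a likelihood ratio; returns posterior samples after burn-in.
    """
    rng = np.random.default_rng(seed)
    theta = np.array([0.0, np.log(x.mean())])   # start: shape=1, scale=mean(x)
    ll = weibull_loglik(x, np.exp(theta[0]), np.exp(theta[1]))
    samples = []
    for i in range(n_iter):
        prop = theta + rng.normal(0.0, step, 2)  # symmetric proposal
        ll_prop = weibull_loglik(x, np.exp(prop[0]), np.exp(prop[1]))
        if np.log(rng.uniform()) < ll_prop - ll:  # accept/reject
            theta, ll = prop, ll_prop
        if i >= burn:
            samples.append(np.exp(theta))
    return np.array(samples)                     # columns: shape, scale
```

The spread of the retained samples is what yields the confidence (credible) intervals on the deterioration curves that the validation step checks against held-out inspection data.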
Procedia PDF Downloads 727
788 Solving LWE by Progressive Pumps and Its Optimization
Authors: Leizhang Wang, Baocang Wang
Abstract:
General Sieve Kernel (G6K) is currently considered the fastest algorithm for the shortest vector problem (SVP) and is the record holder of the open SVP challenge. We study the lattice basis quality improvement effects of the Workout proposed in G6K, which is composed of a series of pumps to solve SVP. Firstly, we use a low-dimensional pump output basis to propose a predictor of the quality of high-dimensional pump output bases. Both theoretical analysis and experimental tests are performed to illustrate that it is more computationally expensive to solve LWE problems by using the G6K default SVP solving strategy (Workout) than by lattice reduction algorithms (e.g., BKZ 2.0, Progressive BKZ, Pump, and Jump BKZ) with sieving as their SVP oracle. Secondly, the default Workout in G6K is optimized to achieve a stronger reduction at lower computational cost. Thirdly, we combine the optimized Workout and the pump output basis quality predictor to further reduce the computational cost by optimizing the LWE instance selection strategy. In fact, we can solve the TU LWE challenge (n = 65, q = 4225, α = 0.005) 13.6 times faster than with the G6K default Workout. Fourthly, we consider a combined two-stage (preprocessing by BKZ- and a big Pump) LWE solving strategy. Both stages use dimension-for-free technology to give new theoretical security estimations of several LWE-based cryptographic schemes. The security estimations show that the securities of these schemes under the conservative NewHope core-SVP model are somewhat overestimated. In addition, in the case of the LAC scheme, the LWE instance selection strategy can be optimized to further improve the LWE-solving efficiency, even by 15% and 57%.
Finally, some experiments are implemented to examine the effects of our strategies on Normal Form LWE problems, and the results demonstrate that the combined strategy is four times faster than that of NewHope.
Keywords: LWE, G6K, pump estimator, LWE instance selection strategy, dimension for free
Procedia PDF Downloads 60
787 The Effectiveness of Concept Mapping as a Tool for Developing Critical Thinking in Undergraduate Medical Education: A BEME Systematic Review: BEME Guide No. 81
Authors: Marta Fonseca, Pedro Marvão, Beatriz Oliveira, Bruno Heleno, Pedro Carreiro-Martins, Nuno Neuparth, António Rendas
Abstract:
Background: Concept maps (CMs) visually represent hierarchical connections among related ideas. They foster logical organization and clarify idea relationships, potentially aiding medical students in critical thinking (thinking clearly and rationally about what to do or what to believe). However, there are inconsistent claims about the use of CMs in undergraduate medical education. Our three research questions are: 1) What studies have been published on concept mapping in undergraduate medical education? 2) What was the impact of CMs on students’ critical thinking? 3) How and why have these interventions had an educational impact? Methods: Eight databases were systematically searched (and a manual and an additional search were also conducted). After eliminating duplicate entries, titles and abstracts, and then full texts, were independently screened by two authors. Data extraction and quality assessment of the studies were independently performed by two authors. Qualitative and quantitative data were integrated using mixed methods. The results were reported using the structured approach to the reporting in healthcare education of evidence synthesis statement and BEME guidance. Results: Thirty-nine studies from 26 journals were included (19 quantitative, 8 qualitative and 12 mixed-methods studies). CMs were considered a tool to promote critical thinking, both in the perception of students and tutors and in assessing students’ knowledge and/or skills. In addition to their role as facilitators of knowledge integration and critical thinking, CMs were considered both teaching and learning methods. Conclusions: CMs are teaching and learning tools which seem to help medical students develop critical thinking, owing to the flexibility of the tool as a facilitator of knowledge integration and as a learning and teaching method.
The wide range of contexts and purposes, and the variations in how CMs and instruments to assess critical thinking were used, increase our confidence that the positive effects are consistent.
Keywords: concept map, medical education, undergraduate, critical thinking, meaningful learning
Procedia PDF Downloads 125
786 The Main Characteristics of Destructive Motivation
Authors: Elen Gasparyan, Naira Hakobyan
Abstract:
One of the leading factors determining the effectiveness of work in a modern organization is the motivation of its employees. In the scientific psychological literature, this phenomenon is understood mainly in terms of constructive forms of motivation and the search for ways to increase them. At the same time, employee motivation can sometimes lead to a decrease in the productivity of the organization; such destructive motivation is usually not considered from the point of view of the various motivational theories. This article provides an analysis of the various forms of destructive motivation of employees. These forms include formalism in labor behavior, inadequate assessment of the work done, and an imbalance of personal and organizational interests. The destructive motivation of personnel has certain negative consequences both for the employees themselves and for the entire organization: it leads to a decrease in the rate of production and the quality of products or services, increased conflict in the behavior of employees, etc. Currently, there is growing scientific interest in the study of destructive motivation. The subject of psychological research is not only modern socio-psychological processes but also the achievements of scientific thought in the field of theories of motivation and management. This article examines the theoretical approaches of J. S. Adams and Porter-Lawler, provides an analysis of theoretical concepts, and emphasizes the main characteristics of the destructiveness of motivation. Destructive work motivation is presented at the macro, meso, and micro levels. These levels express various directions of development of motivational stimuli: social, organizational, and personal.
At the macro level, the most important characteristics of destructive motivation are the high income gap between employers and employees, a high degree of unemployment, weak social protection of workers, non-compliance by employers with labor legislation, and emergencies. At the organizational level, the main characteristics are decreased diversity of work and insufficient working conditions. At the personal level, the main characteristic of destructive motivation is a discrepancy between personal and organizational interests. A comparative analysis of the theoretical and methodological foundations of the study of motivation makes it possible not only to identify the main characteristics of destructive motivation but also to determine the contours of psychological counseling to reduce destructiveness in the behavior of employees.
Keywords: destructive, motivation, organization, behavior
Procedia PDF Downloads 42
785 Exploring the Food Environments and Their Influence on Food Choices of Working Adults
Authors: Deepa Shokeen, Bani Tamber Aeri
Abstract:
Food environments are believed to play a significant role in the obesity epidemic, and robust research methods are required to establish which factors or aspects of the food environment are relevant to food choice and to adiposity. The relationship between the food environment and obesity is complex. While there is little research linking food access with obesity as an outcome measure in any age group, this article attempts to clarify the relationship between what we eat and the environmental context in which these food choices are made. Methods: A literature search of studies published between January 2000 and December 2013 was undertaken on computerized medical, social science, health, nutrition, and education databases, including Google, PubMed, etc. Reports of organisations such as the World Health Organisation (WHO) and the Centre for Chronic Disease Control (CCDC) were studied to project the data. Results: Studies show that food environments play a significant role in the obesity epidemic, and robust research methods are required to establish which factors or aspects of the food environment are relevant to food choice and to adiposity. Evidence indicates that the food environment may help explain the obesity and cardio-metabolic risk factors seen among young adults. Conclusion: Cardiovascular disease is an ever-growing chronic disease, the incidence of which will increase markedly in the coming decades. Therefore, it is the need of the hour to assess the prevalence of the various risk factors that contribute to the incidence of cardiovascular diseases, especially in the work environment. Research is required to establish how different environments affect different individuals, as individuals interact with the environment on a number of levels.
We need to ascertain the impact of selected food and nutrition environments (information, organization, community, consumer) on the food choice and dietary intake of working adults, as it is important to learn how these food environments influence the eating perceptions and health behaviour of adults.
Keywords: food environment, prevalence, cardiovascular disease, India, worksite, risk factors
Procedia PDF Downloads 401
784 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products
Authors: Maciej Jedrzejczyk, Karolina Marzantowicz
Abstract:
The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by Smart Grid networks. New sources of energy introduce volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact surplus energy based on accurate, time-synchronous measurements. The proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and allow real-time insights into present and forecasted grid operations, as well as the state and health of the network. The objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply, and a longer grid infrastructure life cycle. The methods used for this study are based on a comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security, and adaptability to various market topologies. The intended output of this research is the design of a framework for a safer, more efficient, and scalable Smart Grid network which bridges the gap between traditional components of the energy network and individual energy producers. The results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New platforms for Smart Grids achieving measurable efficiencies will allow for the development of new types of grid KPIs, multi-smart-grid branches, markets, and businesses.
Keywords: autonomous agents, distributed computing, distributed ledger technologies, large-scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids
Procedia PDF Downloads 300
783 Systematic Review and Meta-analysis Investigating the Efficacy of Walking-based Aerobic Exercise Interventions to Treat Postpartum Depression
Authors: V. Pentland, S. Spilsbury, A. Biswas, M. F. Mottola, S. Paplinskie, M. S. Mitchell
Abstract:
Postpartum depression (PPD) is a form of major depressive disorder that afflicts 10–22% of mothers worldwide. Rising demands for traditional PPD treatment options (e.g., psychiatry), especially in the context of the COVID-19 pandemic, are increasingly difficult to meet. More accessible treatment options (e.g., walking) are needed. The objective of this review is to determine the impact of walking on PPD severity. A structured search of seven electronic databases for randomised controlled trials published between 2000 and July 29, 2021, was completed. Studies were included if walking was the sole or primary aerobic exercise modality. A random-effects meta-analysis was conducted for studies reporting PPD symptoms measured using a clinically validated tool. A simple count of positive/null-effect studies was undertaken as part of a narrative summary. Five studies involving 242 participants were included (mean age ~28.9 years; 100% with mild-to-moderate depression). Interventions were 12 (n=4) and 24 (n=1) weeks long. Each assessed PPD severity using the Edinburgh Postnatal Depression Scale (EPDS) and was included in the meta-analysis. The pooled effect estimate suggests that, relative to controls, walking yielded clinically significant decreases in mean EPDS scores from baseline to intervention end (pooled MD=-4.01; 95% CI: -7.18 to -0.84; I²=86%). The narrative summary provides preliminary evidence that walking-only, supervised, and group-based interventions, including 90-120+ minutes/week of moderate-intensity walking, may produce greater EPDS reductions. While limited by a relatively small number of included studies, the pooled effect estimate suggests walking may help mothers manage PPD. This is the first time that walking as a treatment for PPD, an exercise modality that uniquely addresses many barriers faced by mothers, has been summarized in a systematic way. Trial registration: PROSPERO (CRD42020197521) on August 16th, 2020.
Keywords: postpartum, exercise, depression, walking
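The pooled mean difference, its confidence interval, and the I² heterogeneity statistic quoted above come from standard random-effects pooling. The sketch below implements the common DerSimonian-Laird estimator on hypothetical per-study mean differences and standard errors (the review's actual study-level data are not reproduced here).

```python
import numpy as np

def random_effects_pool(md, se):
    """DerSimonian-Laird random-effects pooling of mean differences.

    Returns (pooled MD, 95% CI, I^2 in percent). Inputs are per-study mean
    differences and their standard errors; the example values used in any
    call are illustrative, not the review's extracted data.
    """
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                              # fixed-effect weights
    md_fe = (w * md).sum() / w.sum()
    q = (w * (md - md_fe) ** 2).sum()            # Cochran's Q
    df = len(md) - 1
    c = w.sum() - (w**2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_re = 1.0 / (se**2 + tau2)                  # random-effects weights
    pooled = (w_re * md).sum() / w_re.sum()
    se_pooled = np.sqrt(1.0 / w_re.sum())
    i2 = max(0.0, (q - df) / q * 100.0) if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), i2
```

A large I² (86% in the review) signals that much of the variation across trials reflects genuine between-study differences rather than sampling error, which is why the random-effects model, rather than a fixed-effect one, is the appropriate choice here.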
Procedia PDF Downloads 204
782 Effect of Concentration Level and Moisture Content on the Detection and Quantification of Nickel in Clay Agricultural Soil in Lebanon
Authors: Layan Moussa, Darine Salam, Samir Mustapha
Abstract:
Heavy metal contamination in agricultural soils in Lebanon poses serious environmental and health problems. Intensive efforts are employed to improve existing quantification methods of heavy metals in contaminated environments, since conventional detection techniques have been shown to be time-consuming, tedious, and costly. The application of hyperspectral remote sensing in this field is possible and promising. However, factors impacting the efficiency of hyperspectral imaging in detecting and quantifying heavy metals in agricultural soils have not been thoroughly studied. This study proposes to assess the use of hyperspectral imaging for the detection of Ni in agricultural clay soil collected from the Bekaa Valley, a major agricultural area in Lebanon, under different contamination levels and soil moisture contents. Soil samples were contaminated with Ni, with concentrations ranging from 150 mg/kg to 4000 mg/kg. In parallel, soil with background contamination was subjected to increasing moisture levels varying from 5 to 75%. Hyperspectral imaging was used to detect and quantify Ni contamination in the soil at the different contamination levels and moisture contents. IBM SPSS statistical software was used to develop models that predict the concentration of Ni and the moisture content in agricultural soil. The models were constructed using linear regression algorithms. The spectral curves obtained reflected an inverse correlation of both Ni concentration and moisture content with reflectance. The models developed achieved high predicted R2 values of 0.763 for Ni concentration and 0.854 for moisture content, and indicated that Ni presence is best expressed near 2200 nm and moisture near 1900 nm.
The results from this study would allow us to define the potential of the hyperspectral imaging (HSI) technique as a reliable and cost-effective alternative for heavy metal pollution detection in contaminated soils and for soil moisture prediction.
Keywords: heavy metals, hyperspectral imaging, moisture content, soil contamination
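The regression step can be sketched in a few lines; the reflectance readings at the 2200 nm Ni feature below are hypothetical (the study built its models in IBM SPSS from full spectral curves), but the inverse reflectance-concentration trend mirrors the reported finding.

```python
import numpy as np

# Hypothetical reflectance at the 2200 nm Ni absorption feature versus
# spiked Ni concentration (mg/kg); higher Ni -> lower reflectance.
reflectance_2200 = np.array([0.62, 0.55, 0.47, 0.40, 0.33, 0.27])
ni_mg_kg = np.array([150.0, 500.0, 1000.0, 2000.0, 3000.0, 4000.0])

# Ordinary least squares: Ni = a * reflectance + b
A = np.column_stack([reflectance_2200, np.ones_like(reflectance_2200)])
(a, b), *_ = np.linalg.lstsq(A, ni_mg_kg, rcond=None)

# Coefficient of determination of the fit
pred = A @ np.array([a, b])
r2 = 1.0 - np.sum((ni_mg_kg - pred) ** 2) / np.sum((ni_mg_kg - ni_mg_kg.mean()) ** 2)
```

A negative slope `a` encodes the inverse correlation; in practice one would validate the predicted R2 on held-out samples rather than the training fit shown here.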
Procedia PDF Downloads 101
781 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the types of loss functions and optimizers. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. It is important to note that the same LivDet subset is used across all training and testing for each model; this way, we can compare performance, in terms of generalization to unseen data, across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer achieves a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports each model's accuracy together with parameter counts and mean average error rates, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance.
For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied in our final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
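The sensitivity to loss choice can be illustrated outside any particular CNN: the same prediction is penalized quite differently by cross-entropy and hinge loss. This is a toy numeric sketch, not the paper's evaluation code.

```python
import math

def cross_entropy(p, y):
    """Binary cross-entropy for predicted live-probability p and label y in {0, 1}."""
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1.0 - p + eps))

def hinge(score, y):
    """Hinge loss for a raw decision score and label y in {-1, +1}."""
    return max(0.0, 1.0 - y * score)

# A confidently correct "live" classification: hinge loss is already zero,
# while cross-entropy still pushes the probability toward 1 - so gradients,
# and hence errors on the same evaluation, differ by loss function.
ce_confident = cross_entropy(0.9, 1)   # small but nonzero
hl_confident = hinge(2.0, 1)           # exactly zero past the margin
```

The margin behaviour of hinge loss versus the never-zero gradient of cross-entropy is one concrete reason the paper observes different error profiles per loss function on the same data.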
Procedia PDF Downloads 136
780 Research on the Ecological Impact Evaluation Index System of Transportation Construction Projects
Authors: Yu Chen, Xiaoguang Yang, Lin Lin
Abstract:
Traffic engineering construction is an important infrastructure for economic and social development. During construction and operation, the ability to correctly evaluate a project's environmental impact is crucial to the rational operation of existing transportation projects, the sound development of transportation engineering construction, and the adoption of corresponding measures to carry out environmental protection work scientifically. Most existing research on ecological and environmental impact assessment is limited to individual aspects of the environment rather than the overall evaluation of the environmental system; its conclusions tend to be qualitative analyses at the technical and policy levels, and quantitative research results and operable quantitative evaluation models are lacking. In this paper, a comprehensive analysis of the ecological and environmental impacts of transportation construction projects is conducted, and factors such as the accessibility of data and the reliability of calculation results are considered in order to extract indicators that reflect the essence and characteristics of those impacts. The qualitative evaluation indicators were screened using the expert review method, the qualitative indicators were measured using the fuzzy statistics method, the quantitative indicators were screened using principal component analysis, and the quantitative indicators were measured by both literature search and calculation. An environmental impact evaluation index system with a general objective layer, a sub-objective layer and an indicator layer was established, dividing the environmental impact of a transportation construction project into two periods: the construction period and the operation period.
On the basis of the evaluation index system, the index weights are determined using the analytic hierarchy process, and the individual indicators to be evaluated are rendered dimensionless, eliminating the influence of the indicators' original scales and units. Finally, the thesis applies the above research results to actual engineering practice to verify the correctness and operability of the evaluation method.
Keywords: transportation construction projects, ecological and environmental impact, analysis and evaluation, indicator evaluation system
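The hierarchical-analysis (AHP) weighting and the dimensionless scaling can be sketched with a small pairwise comparison matrix. The three criteria and the judgment values below are invented for illustration; they are not taken from the thesis.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix for three construction-period
# indicators (say, air quality vs. noise vs. ecology).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 0.5, 1.0]])

# AHP weights = normalized principal eigenvector of the comparison matrix
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w = w / w.sum()

# Consistency ratio (Saaty's random index RI = 0.58 for n = 3);
# CR < 0.1 means the pairwise judgments are acceptably consistent.
lam = vals.real[k]
cr = ((lam - 3) / (3 - 1)) / 0.58

def dimensionless(x):
    """Min-max scaling: strips each indicator's original units and range."""
    x = np.asarray(x, float)
    return (x - x.min()) / (x.max() - x.min())
```

After scaling, every indicator lies in [0, 1] regardless of its original background and meaning, so the weighted sum across the indicator layer is well defined.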
Procedia PDF Downloads 105
779 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models
Authors: Ainouna Bouziane
Abstract:
The ability of Electron Tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed in recent decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (Compressed Sensing - total variation minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is also an important issue that has not yet been properly addressed, because a perfectly known reference is needed. The problem is particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction/segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters like the range of tilt angles, image noise level, or object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis.
Keywords: electron tomography, supported catalysts, nanometrology, error assessment
Procedia PDF Downloads 87
778 A Novel Epitope Prediction for Vaccine Designing against Ebola Viral Envelope Proteins
Authors: Manju Kanu, Subrata Sinha, Surabhi Johari
Abstract:
Ebola viruses are among the best-studied viruses; however, no effective prevention against EBOV has been developed. Epitope-based vaccines provide a new strategy for prophylactic and therapeutic application of pathogen-specific immunity. A critical requirement of this strategy is the identification and selection of T-cell epitopes that act as vaccine targets. This study describes current methodologies for the selection process, with Ebola virus as a model system; a great challenge in the field of Ebola virus research is to design a universal vaccine. A combination of publicly available bioinformatics algorithms and computational tools is used to screen and select antigen sequences as potential T-cell epitopes of supertype Human Leukocyte Antigen (HLA) alleles. The MUSCLE and MOTIF tools were used to find the most conserved peptide sequences of the viral proteins. Immunoinformatics tools were used to predict immunogenic peptides of the viral proteins in Zaire strains of Ebola virus. Putative epitopes for the viral proteins (VP) were predicted from their conserved peptide sequences. Three tools, NetCTL 1.2, BIMAS and SYFPEITHI, were used to predict the Class I putative epitopes, while three tools, ProPred, IEDB-SMM-align and NetMHCII 2.2, were used to predict the Class II putative epitopes. B-cell epitopes were predicted with BCPREDS 1.0. Immunogenic peptides were identified and selected manually from the putative epitopes predicted by the online tools individually for both MHC classes. Finally, the predicted peptide sequences for both MHC classes were examined for a common region, which was selected as the common immunogenic peptide. The immunogenic peptides found for the viral proteins of Ebola virus were the epitopes FLESGAVKY and SSLAKHGEY. These predicted peptides could be promising candidates to be used as targets for vaccine design.
Keywords: epitope, B cell, immunogenicity, Ebola
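The final manual step, keeping only peptides that every tool predicts, is essentially a set intersection. In this sketch only FLESGAVKY and SSLAKHGEY come from the abstract; the other 9-mers are dummy fillers, and the per-tool outputs are invented for illustration.

```python
def consensus_epitopes(tool_predictions):
    """Return the peptides predicted as epitopes by every tool."""
    return set.intersection(*(set(p) for p in tool_predictions.values()))

# Hypothetical Class I outputs; fillers LLLDRLNQL / GLSRYVARL are dummies.
predictions = {
    "NetCTL":    {"FLESGAVKY", "SSLAKHGEY", "LLLDRLNQL"},
    "BIMAS":     {"FLESGAVKY", "SSLAKHGEY", "GLSRYVARL"},
    "SYFPEITHI": {"FLESGAVKY", "SSLAKHGEY"},
}
common = consensus_epitopes(predictions)
```

Requiring agreement across independent predictors is a simple way to trade sensitivity for confidence before committing candidates to wet-lab validation.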
Procedia PDF Downloads 314
777 Risk Factors Associated with Outbreak of Cerebrospinal Meningitis in Kano State, Nigeria, March-May 2017
Authors: Visa I. Tyakaray, M. Abdulaziz, O. Badmus, N. Karaye, M. Dalhat, A. Shehu, I. Bello, T. Hussaini, S. Akar, G. Effah, P. Nguku
Abstract:
Introduction: Nigeria, lying in the meningitis belt, has recorded outbreaks of meningitis in the past. A multi-state outbreak of Cerebrospinal Meningitis (CSM) caused by Neisseria meningitidis occurred in 2017, involving 24 states, and Kano State reported its first two confirmed CSM cases on 22nd March 2017. We conducted the outbreak investigation to characterize the outbreak, determine its associated risk factors and institute appropriate control measures. Method: We conducted an unmatched case-control study with a 1:2 ratio. A case was defined as any person with sudden onset of fever (>38.5°C rectal or 38.0°C axillary) and one of the following: neck stiffness, altered consciousness or bulging fontanelle in toddlers, while a control was defined as any person who resides around a case, such as family members, caregivers, neighbors, and healthcare personnel. We reviewed and validated the line list and conducted an active case search in health facilities and neighboring communities. Descriptive, bivariate, stratified and multivariate analyses were performed. Laboratory confirmation was by latex agglutination and/or culture. Results: We recruited 48 cases with a median age of 11 years (1 month - 65 years); the attack rate was 2.4/100,000 population with a case fatality rate of 8%; 34 of 44 local government areas were affected. On stratification, age was found to be a confounder. Independent factors associated with the outbreak were age (Adjusted Odds Ratio, AOR=6.58; 95% Confidence Interval, CI=2.85-15.18), history of vaccination (AOR=0.37; 95% CI=0.13-0.99) and history of travel (AOR=10.16; 95% CI=1.99-51.85). Laboratory results showed 22 positive cases of Neisseria meningitidis types C and A/Y. Conclusion: The major risk factors associated with this outbreak were age (>14 years), not being vaccinated and history of travel. We sensitized communities and strengthened case management.
We recommended immediate reactive vaccination and enhanced surveillance in bordering communities.
Keywords: cerebrospinal, factors, Kano-Nigeria, meningitis, risk
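The crude (unadjusted) odds ratio behind such a case-control analysis comes from a 2x2 table. The counts below are hypothetical; the abstract's AORs additionally adjust for confounders such as age via multivariate (logistic regression) analysis.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Crude OR with a Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical exposure "history of travel" among 48 cases and 96 controls
or_travel, ci_travel = odds_ratio(a=12, b=6, c=36, d=90)
```

A lower confidence bound above 1 marks the exposure as a statistically significant risk factor, which is how intervals such as 1.99-51.85 in the abstract are read.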
Procedia PDF Downloads 215
776 [Keynote Speech]: Curiosity, Innovation and Technological Advancements Shaping the Future of Science, Technology, Engineering and Mathematics Education
Authors: Ana Hol
Abstract:
We live in a constantly changing environment where technology has become an integral component of our day-to-day life. We rely heavily on mobile devices, we search for data via the web, we use smart home sensors to create the most suitable ambiences, and we use applications to shop, research, communicate and share data. Heavy reliance on technology is therefore creating new connections between the STEM (Science, Technology, Engineering and Mathematics) fields, which in turn raises the question of what the STEM education of the future should look like. This study was based on the reviews of six Australian Information Systems students who undertook an international study tour to India, where they were given an opportunity to network, communicate and meet local students, staff and business representatives, and from them learn about local business implementations, customs and regulations. The research identifies that if we are to continue to implement and utilise electronic devices on a global scale, for example smart cars that can smoothly cross borders, we will need a workforce with knowledge of the cars themselves, their parts, roads and transport networks, road rules, road sensors, road monitoring technologies, graphical user interfaces, movement detection systems, as well as day-to-day operations, the legal rules and regulations of each region and country, insurance policies, policing and processes, so that the wide array of sensors can be controlled across countries' borders. In conclusion, it can be noted that allowing students to learn about the local conditions, roads, operations, business processes, customs and values in different countries gives students a cutting-edge advantage, as such knowledge cannot be transferred via electronic sources alone.
However, once an understanding of each problem or project is established, multidisciplinary innovative STEM projects can be conducted smoothly.
Keywords: STEM, curiosity, innovation, advancements
Procedia PDF Downloads 199
775 The Crossroad of Identities in Wajdi Mouawad's 'Littoral': A Rhizomatic Approach to Identity Reconstruction through Theatre and Performance
Authors: Mai Hussein
Abstract:
'Littoral' is an original voice in Québécois theatre, spanning the cultural gaps that can exist between the playwright's native Lebanon, North America, Quebec, and Europe. Littoral is a 'crossroad' of cultures and themes, a 'bridge' connecting cultures and languages. It represents a new form of theatrical writing that combines the verbal, the vocal and the pantomimic, calling upon the stage to question the real and to engage characters in a quest, in a journey of mourning, of reconstructing identity and a collective memory despite ruins and wars. A theatre of witness, a theatre denouncing the irrationality of racism and war, a theatre 'performing' the symptoms of the stress disorders of characters passing from resistance and anger to reconciliation and giving voice to the silenced victims: these are some of the pillars that this play has to offer. In this corrida between life and death, identity appears as a work-in-progress shaped in the presence of the Self and the Other. This trajectory re-opens the door wide to questions, interrogations, and reflections that show how this play sits at the nexus of contemporary preoccupations of the 21st century: the importance of memory, the search for meaning, the pursuit of the infinite. It also shows how a play can create bridges between languages, cultures, societies, and movements; how it mediates between words and silence; and how it closes the gaps between the textual and the performative while investigating the power of intermediality to confront racism and segregation. It also underlines the centrality of confrontation between cultures, languages, and writing and representation techniques in challenging the characters in their quest to restructure their shattered yet intertwined identities. The goal of this theatre would then be to invite everyone involved in the process on a journey of self-discovery away from their comfort zone.
Everyone will have to explore the liminal space, to read between the lines of the written text as well as between the text and the performance, to explore the gaps and tensions that exist between what is said and what is played, between the 'parole' and the performative body.
Keywords: identity, memory, performance, testimony, trauma
Procedia PDF Downloads 115
774 A Systematic Review on the Development of a Cost Estimation Framework: A Case Study of Nigeria
Authors: Babatunde Dosumu, Obuks Ejohwomu, Akilu Yunusa-Kaltungo
Abstract:
Cost estimation in construction is often difficult, particularly when dealing with risks and uncertainties, which are inevitable and peculiar to developing countries like Nigeria. The direct consequences are major deviations in cost, duration, and quality. The fundamental aim of this study is to develop a framework for assessing the impacts of risk on cost estimation, which in turn causes variability between the contract sum and the final account. This is very important, as the initial estimates given to clients should reflect a certain magnitude of consistency and accuracy, upon which the client builds other planning-related activities, and should also enhance the capabilities of construction industry professionals by enabling better prediction of the final account from the contract sum. To achieve this, a systematic literature review was conducted with 'cost variability' and 'construction projects' as the search string within three databases: Scopus, Web of Science, and EBSCO (Business Source Premier); the results were further analyzed to discover gaps in knowledge and research. From the extensive review, it was found that the factors causing deviation between final accounts and contract sums ranged between 1 and 45. It was also discovered that a cost estimation framework similar to the Building Cost Information Service (BCIS) is unavailable in Nigeria, which is a major reason why initial estimates are very often inconsistent, leading to project delay, abandonment, or termination at the expense of the huge sums of money invested. It was concluded that the development of a cost estimation framework, adjudged an important tool in risk shedding rather than risk sharing in project risk management, would be a panacea to the cost estimation problems leading to cost variability in the Nigerian construction industry by the time this ongoing Ph.D. research is completed.
It was recommended that practitioners in the construction industry should always take risk into account in order to facilitate the rapid development of the construction industry in Nigeria, which should give stakeholders in both the private and public sectors a more in-depth understanding of estimation effectiveness and efficiency.
Keywords: cost variability, construction projects, future studies, Nigeria
Procedia PDF Downloads 209773 Fluid Prescribing Post Laparotomies
Authors: Gusa Hall, Barrie Keeler, Achal Khanna
Abstract:
Introduction: NICE guidelines have highlighted the consequences of IV fluid mismanagement. The main aim of this study was to audit fluid prescribing post laparotomies to identify whether fluids were prescribed in accordance with NICE guidelines. Methodology: A retrospective database search of eight specific laparotomy procedures (colectomy right and left, Hartmann's procedure, small bowel resection, perforated ulcer, abdominoperineal resection, anterior resection, panproctocolectomy, subtotal colectomy) highlighted 29 laparotomies between April 2019 and May 2019. Two of the 29 patients had secondary procedures during the same admission, leaving n=27 patients. Database case notes were reviewed for date of procedure, length of admission, fluids prescribed and amounts, nasogastric tube output, daily blood results for the electrolytes sodium and potassium, and operational losses. Results: Of the 27 patients, 93% (25/27) received IV fluids, but only 19% (5/27) received the correct IV fluids in accordance with NICE guidelines. 93% (25/27) of those who received IV fluids had correct electrolyte levels (sodium and potassium), and 100% (27/27) of patients received blood tests (U&Es) to check electrolyte levels. Operational losses were not documented for any patient (0/27). IV fluids matched nasogastric tube output in 100% (3/3) of the patients who had a nasogastric tube in situ. Conclusion: A PubMed literature review on barriers to safer IV prescribing highlighted educational interventions focused on prescriber knowledge rather than on how to execute the prescribing task. This audit suggests IV fluids post laparotomies are not being prescribed consistently in accordance with NICE guidelines. Surgical management plans should be clearer on IV fluid and electrolyte requirements for the 24 hours after the plan has been initiated.
In addition, further teaching and training around IV prescribing are needed, together with frequent surgical audits of IV fluid prescribing post-surgery to evaluate improvements.
Keywords: audit, IV fluid prescribing, laparotomy, NICE guidelines
Procedia PDF Downloads 120772 An Assessment of Floodplain Vegetation Response to Groundwater Changes Using the Soil & Water Assessment Tool Hydrological Model, Geographic Information System, and Machine Learning in the Southeast Australian River Basin
Authors: Newton Muhury, Armando A. Apan, Tek N. Marasani, Gebiaw T. Ayele
Abstract:
The changing climate has degraded freshwater availability in Australia, which influences vegetation growth to a great extent. This study assessed vegetation responses to groundwater using Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) Normalised Difference Vegetation Index (NDVI) and soil water content (SWC). A hydrological model, SWAT, was set up in a southeast Australian river catchment for groundwater analysis. The model was calibrated and validated against monthly streamflow from 2001 to 2006 and from 2007 to 2010, respectively. The SWAT-simulated soil water content for 43 sub-basins and monthly MODIS NDVI data for three different types of vegetation (forest, shrub, and grass) were applied in the machine learning tool Waikato Environment for Knowledge Analysis (WEKA), using two supervised machine learning algorithms, i.e., support vector machine (SVM) and random forest (RF). The assessment shows that the responses of different vegetation types and soil water content vary between the dry and wet seasons. The WEKA model generated high positive relationships (r = 0.76, 0.73, and 0.81) between the NDVI values of all vegetation in the sub-basins and soil water content (SWC), groundwater flow (GW), and the combination of these two variables, respectively, during the dry season. However, these responses were reduced by 36.8% (r = 0.48) and 13.6% (r = 0.63) against GW and SWC, respectively, in the wet season. Although the rainfall pattern is highly variable in the study area, the summer rainfall is very effective for the growth of the grass vegetation type. This study has enriched our knowledge of vegetation responses to groundwater in each season, which will facilitate better floodplain vegetation management.
Keywords: ArcSWAT, machine learning, floodplain vegetation, MODIS NDVI, groundwater
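WEKA reports such correlation coefficients, but the underlying Pearson r is simple to compute directly. The NDVI and soil water values below are synthetic stand-ins for the sub-basin data, chosen only to illustrate a strong dry-season relationship.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two samples."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Synthetic dry-season pairs: NDVI tracking soil water content (SWC, mm)
ndvi = [0.21, 0.28, 0.35, 0.41, 0.52, 0.58]
swc = [55.0, 70.0, 82.0, 95.0, 118.0, 124.0]
r_dry = pearson_r(ndvi, swc)
```

In the study's wet season the same computation over wetter, noisier pairs would yield the lower r values reported, because rainfall rather than groundwater drives greenness.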
Procedia PDF Downloads 101
771 Nutritional Characteristics, Phytochemical and Antimicrobial Properties of Vaccinium parvifolium (Ericaceae) Leaf Protein Concentrates
Authors: Sodamade A., Bolaji K. A.
Abstract:
Problems associated with protein malnutrition are still prevalent in third-world countries, leading to the constant search for plants that could serve nutritional and medicinal purposes. Huckleberry is one of the plants that has proven useful locally in the treatment of numerous ailments and diseases. A fresh sample of the plant (Vaccinium parvifolium) was collected from a vegetable garden situated near the Erelu dam of the Emmanuel Alayande College of Education Campus, Oyo. The sample was authenticated at the Forestry Research Institute of Nigeria (FRIN), Ibadan. The leaves of the plant were plucked and processed into leaf protein concentrates before the proximate composition, mineral analysis, phytochemical and antimicrobial properties were determined using standard methods of analysis. The results of the proximate analysis showed: moisture content, 9.89±0.051 g/100g; ash, 3.23±0.12 g/100g; crude fat, 3.96±0.11 g/100g; and nitrogen-free extractive, 61.27±0.56 g/100g. The mineral analysis of the sample showed: Mg, 0.081±0.00 mg/100g; Ca, 42.30±0.05 mg/100g; Na, 27.57±0.09 mg/100g; K, 6.81±0.01 mg/100g; P, 8.90±0.03 mg/100g; Fe, 0.51±0.00 mg/100g; Zn, 0.021±0.00 mg/100g; Cd, 0.04±0.04 mg/100g; Pb, 0.002±0.00 mg/100g; Cr, 0.041±0.00 mg/100g. Mercury was not detected in the sample. The phytochemical analysis of the leaf protein concentrates of the huckleberry showed the presence of alkaloids, saponins, flavonoids, tannins, coumarins, steroids, terpenoids, cardiac glycosides, glycosides, quinones, anthocyanins, phytosterols, and phenols. Ethanolic extracts of the Vaccinium parvifolium L. leaf protein concentrates contain bioactive compounds capable of combating the following microorganisms: Staphylococcus aureus, Streptococcus pyogenes, Streptococcus faecalis, Pseudomonas aeruginosa, Klebsiella pneumoniae and Proteus mirabilis. The results of the analysis of Vaccinium parvifolium L.
leaf protein concentrates showed that the sample contains valuable nutrient and mineral constituents and phytochemical compounds that could make it useful for medicinal purposes.
Keywords: leaf protein concentrates, Vaccinium parvifolium, nutritional characteristics, mineral composition, antimicrobial activity
Procedia PDF Downloads 78
770 A Cross-Cultural Approach for Communication with Biological and Non-Biological Intelligences
Authors: Thomas Schalow
Abstract:
This paper posits the need to take a cross-cultural approach to communication with non-human cultures and intelligences in order to meet the following three imminent contingencies: communicating with sentient biological intelligences, communicating with extraterrestrial intelligences, and communicating with artificial super-intelligences. The paper begins with a discussion of how intelligence emerges. It disputes some common assumptions we maintain about consciousness, intention, and language. The paper next explores cross-cultural communication among humans, including non-sapiens species. The next argument made is that we need to become much more serious about communicating with the non-human, intelligent life forms that already exist around us here on Earth. There is an urgent need to broaden our definition of communication and reach out to the other sentient life forms that inhabit our world. The paper next examines the science and philosophy behind CETI (communication with extraterrestrial intelligences) and how it has proven useful, even in the absence of contact with alien life. However, CETI’s assumptions and methodology need to be revised and based on the cross-cultural approach to communication proposed in this paper if we are truly serious about finding and communicating with life beyond Earth. The final theme explored in this paper is communication with non-biological super-intelligences using a cross-cultural communication approach. This will present a serious challenge for humanity, as we have never been truly compelled to converse with other species, and our failure to seriously consider such intercourse has left us largely unprepared to deal with communication in a future that will be mediated and controlled by computer algorithms. Fortunately, our experience dealing with other human cultures can provide us with a framework for this communication. 
The basic assumptions behind intercultural communication can be applied to the many types of communication envisioned in this paper if we are willing to recognize that we are in fact dealing with other cultures when we interact with other species, alien life, and artificial super-intelligence. The ideas considered in this paper will require a new mindset for humanity, but a new disposition will prepare us to face the challenges posed by a future dominated by artificial intelligence.
Keywords: artificial intelligence, CETI, communication, culture, language
Procedia PDF Downloads 358
769 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there is a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R, and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) will be presented, and results and performance metrics discussed.
Keywords: time-series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning
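The modeling approach the abstract describes can be illustrated with a minimal sketch. This is not the authors' actual R/Azure pipeline: the data here are synthetic, and scikit-learn's gradient-boosted trees (with squared-error and quantile losses) merely stand in for the Boosted Decision Tree and Fast Forest Quantile modules; all feature names and parameters are illustrative assumptions.

```python
# Sketch: hourly load forecasting from weather features with boosted trees.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)  # 60 days of hourly observations

# Synthetic drivers: a daily cycle plus temperature and humidity, standing in
# for the weather variables the paper found important.
temperature = 15 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
humidity = 60 + rng.normal(0, 5, hours.size)
load = (100 + 0.8 * temperature + 0.1 * humidity
        + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size))

X = np.column_stack([hours % 24, temperature, humidity])
split = 24 * 50  # train on the first 50 days, test on the last 10
X_tr, X_te, y_tr, y_te = X[:split], X[split:], load[:split], load[split:]

# Point forecast (squared-error loss) ...
point = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
# ... and an upper-quantile model (90th percentile), in the spirit of
# Fast Forest Quantile regression.
upper = GradientBoostingRegressor(loss="quantile", alpha=0.9,
                                  n_estimators=200, max_depth=3).fit(X_tr, y_tr)

mae = np.mean(np.abs(point.predict(X_te) - y_te))
coverage = np.mean(upper.predict(X_te) >= y_te)
print(f"test MAE: {mae:.2f}, 90th-percentile coverage: {coverage:.2f}")
```

In a real microgrid deployment, the synthetic arrays would be replaced by cleaned historical load and weather observations, and lagged-load features would typically be added during feature engineering.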
Procedia PDF Downloads 297
768 Pediatric Health Nursing Research in Jordan: Evaluating the State of Knowledge and Determining Future Research Direction
Authors: Inaam Khalaf, Nadin M. Abdel Razeq, Hamza Alduraidi, Suhaila Halasa, Omayyah S. Nassar, Eman Al-Horani, Jumana Shehadeh, Anna Talal
Abstract:
Background: Nursing researchers are responsible for generating knowledge that corresponds to national and global research priorities in order to promote, restore, and maintain the health of individuals and societies. The objectives of this scoping review of Jordanian literature are to assess the existing research on pediatric nursing in terms of evolution, authorship and collaborations, funding sources, methodologies, topics of research, and pediatric subjects' age groups so as to identify gaps in research. Methodology: A search was conducted using related keywords obtained from national and international databases. The reviewed literature included pediatric health articles published through December 2019 in English and Arabic, authored by nursing researchers. The investigators assessed the retrieved studies and extracted data using a data-mining checklist. Results: The review included 265 articles authored by Jordanian nursing researchers concerning children's health, published between 1987 and 2019; 95% were published between 2009 and 2019. The most commonly applied research methodology was the descriptive non-experimental method (76%). The main generic topics were health promotion and disease prevention (23%), chronic physical conditions (19%), and mental health, behavioral disorders, and forensic issues (16%). Conclusion: The review findings identified a grave shortage of evidence concerning nursing care issues for children below five years of age, especially those between ages two and five years. The research priorities identified in this review resonate with those identified in international reports. Implications: Nursing researchers are encouraged to conduct more research targeting topics of national-level importance in collaboration with clinically involved nurses and international scholars.
Keywords: Jordan, scoping review, children health nursing, pediatric, adolescents
Procedia PDF Downloads 86
767 Ni-W-P Alloy Coating as an Alternate to Electroplated Hard Cr Coating
Authors: S. K. Ghosh, C. Srivastava, P. K. Limaye, V. Kain
Abstract:
Electroplated hard chromium is widely used in the coatings and surface-finishing, automobile, and aerospace industries because of its excellent hardness, wear resistance, and corrosion properties. However, its precursor, Cr⁶⁺, is highly carcinogenic, and an international consensus has been reached to replace this coating technology with an alternative. The search for alternatives to electroplated hard chrome continues worldwide. Various alloys and nanocomposites, such as Co-W alloys, Ni-graphene, and Ni-diamond nanocomposites, have already shown promising results in this regard. In this study, electroless Ni-P alloys with excellent corrosion resistance were taken as the base matrix, and tungsten was incorporated as a third alloying element to improve the hardness and wear resistance of the resultant alloy coating. The present work focuses on the preparation of Ni-W-P coatings by electrodeposition with different phosphorus contents and their effect on electrochemical, mechanical, and tribological performance. The results were also compared with Ni-W alloys. Composition analysis by EDS showed deposition of Ni-32.85 wt% W-3.84 wt% P (designated Ni-W-LP) and Ni-18.55 wt% W-8.73 wt% P (designated Ni-W-HP) alloy coatings from electrolytes containing 0.006 and 0.01 M sodium hypophosphite, respectively. Inhibition of tungsten deposition in the presence of phosphorus was noted. SEM investigation showed cauliflower-like growth along with a few microcracks. The as-deposited Ni-W-P alloy coating was amorphous, as confirmed by XRD investigation, and step-wise crystallization was observed upon annealing at higher temperatures. For all the coatings, the nanohardness increased after heat treatment; typical nanohardness values for samples annealed at 400°C were 18.65±0.20 GPa, 20.03±0.25 GPa, and 19.17±0.25 GPa for the Ni-W, Ni-W-LP, and Ni-W-HP coatings, respectively.
The nanohardness data are therefore very promising. Wear and coefficient-of-friction data were recorded by applying different normal loads in reciprocating motion using a ball-on-plate geometry. After the experiments, the wear mechanism was established by detailed investigation of the wear-scar morphology. Potentiodynamic measurements showed that the coating with high phosphorus content was the most corrosion resistant in 3.5 wt% NaCl solution.
Keywords: corrosion, electrodeposition, nanohardness, Ni-W-P alloy coating
Procedia PDF Downloads 348