Search results for: multiple users
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6875

5555 Semantic Differences between Bug Labeling of Different Repositories via Machine Learning

Authors: Pooja Khanal, Huaming Zhang

Abstract:

Labeling of issues/bugs, also known as bug classification, plays a vital role in software engineering. Some known labels/classes of bugs are 'User Interface', 'Security', and 'API'. Most of the time, when a reporter reports a bug, they try to assign some predefined label to it. Those issues are reported for a project, and each project is a repository in GitHub/GitLab, which contains multiple issues. There are many software project repositories, ranging from individual projects to commercial projects. The labels assigned in different repositories may depend on various factors such as human instinct, generalization of labels, the label assignment policy followed by the reporter, etc. While the reporter of an issue may instinctively give that issue a label, another person reporting the same issue may label it differently. This way, it is not known mathematically whether a label in one repository is similar to or different from the label in another repository. Hence, the primary goal of this research is to find the semantic differences between bug labeling of different repositories via machine learning. Independent optimal classifiers for individual repositories are built first using the text features from the reported issues. The optimal classifiers may include a combination of multiple classifiers stacked together. Then, those classifiers are used to cross-test other repositories, which allows the result to be deduced mathematically; a minimal sketch of this cross-testing idea is given below. The product of this ongoing research includes a formalized open-source GitHub issues database that is used to deduce the similarity of the labels pertaining to the different repositories.
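As a rough illustration of the cross-testing step described above, the sketch below trains an independent text classifier per repository and scores it on every other repository's issues; the TF-IDF plus logistic-regression pipeline and the dictionary-based data layout are illustrative assumptions, not necessarily the authors' optimal (possibly stacked) classifiers.

```python
# Hypothetical layout: repos = {"repoA": (list_of_issue_texts, list_of_labels), ...}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

def train_repo_classifier(issue_texts, labels):
    """Train an independent classifier for one repository from issue text features."""
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(issue_texts, labels)
    return clf

def cross_test(repos):
    """Score each repository's classifier on every repository's issues.
    Low off-diagonal agreement suggests two repositories use labels differently."""
    classifiers = {name: train_repo_classifier(texts, labels)
                   for name, (texts, labels) in repos.items()}
    agreement = {}
    for name_a, clf in classifiers.items():
        for name_b, (texts, labels) in repos.items():
            agreement[(name_a, name_b)] = accuracy_score(labels, clf.predict(texts))
    return agreement
```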

Keywords: bug classification, bug labels, GitHub issues, semantic differences

Procedia PDF Downloads 180
5554 The Location of Park and Ride Facilities Using the Fuzzy Inference Model

Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas

Abstract:

Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of the park and ride system (P&R) is an increasingly popular way of limiting vehicular traffic. Determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are usually formulated in a general and descriptive way. Research outsourced to specialists is expensive and time consuming, and most of the focus is on the examination of a few selected places. Practice has shown that choosing the location of these sites intuitively, without a detailed analysis of all the circumstances, often gives negative results; the existing facilities are then not used as expected. Location methods are also widely discussed in the scientific literature, but the mathematical models built often do not treat the problem comprehensively, e.g. assuming that the city is linear and developed along one important transport corridor. The paper presents a new method in which expert knowledge is applied to a fuzzy inference model. With such a system, even a less experienced person, e.g. an urban planner or an official, can benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can also be verified quickly. The proposed method is intended for testing car park locations in a city. The paper will show selected examples of locations of P&R facilities in cities planning to introduce the P&R system. The analysis of existing facilities will also be shown and confronted with the opinions of system users, with particular emphasis on unpopular locations. The research is executed using the fuzzy inference model which was built and described in more detail in an earlier paper of the authors; an illustrative sketch of this kind of rule-based scoring is given below. The results of the analyses are compared to documents on P&R facility locations commissioned by the city and to opinions of users of existing facilities expressed on social networking sites. The study of existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method proves to be good while not requiring the involvement of a large team of experts or large financial contributions for complicated research. The method also provides an opportunity to show alternative locations of P&R facilities. The performed studies confirm the method, which can be applied in urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are not worse than the results of analyses by employed experts. The advantage of this method is its ease of use, which simplifies professional expert analysis. The ability to analyze a large number of alternative locations gives a broader view of the problem. It is valuable that the arduous analysis by a team of people can be replaced by the model's calculation. According to the authors, the proposed method is also suitable for implementation on a GIS platform.
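The following is a minimal, self-contained sketch of rule-based fuzzy scoring of a candidate P&R site in the spirit of the expert model described above; the two inputs, the membership ramps and the rule outputs are hypothetical stand-ins (a zero-order Takagi-Sugeno style inference), not the authors' actual rule base.

```python
def ramp_down(x, lo, hi):
    """Membership 1 below lo, 0 above hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def ramp_up(x, lo, hi):
    return 1.0 - ramp_down(x, lo, hi)

def pr_suitability(dist_to_transit_m, inbound_congestion):
    """Score a candidate location between 0 (poor) and 1 (good)."""
    near = ramp_down(dist_to_transit_m, 200.0, 800.0)       # close to public transport
    far = ramp_up(dist_to_transit_m, 200.0, 800.0)
    congested = ramp_up(inbound_congestion, 0.4, 0.8)       # congestion index in [0, 1]
    free_flow = ramp_down(inbound_congestion, 0.4, 0.8)
    rules = [                                               # (rule strength, rule output)
        (min(near, congested), 1.0),   # near transit, relieves a congested corridor -> good
        (min(near, free_flow), 0.6),
        (min(far, congested), 0.4),
        (min(far, free_flow), 0.0),    # far from transit, nothing to relieve -> poor
    ]
    total = sum(strength for strength, _ in rules)
    return sum(strength * out for strength, out in rules) / total if total else 0.0

print(pr_suitability(300.0, 0.7))   # a promising location scores high (about 0.7 here)
```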

Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location

Procedia PDF Downloads 315
5553 Improving Security by Using Secure Servers Communicating via Internet with Standalone Secure Software

Authors: Carlos Gonzalez

Abstract:

This paper describes the use of the Internet as a feature to enhance the security of our software that is going to be distributed/sold to users potentially all over the world. By placing in a secure server some of the features of the secure software, we increase the security of such software. The communication between the protected software and the secure server is done by a double lock algorithm. This paper also includes an analysis of intruders and describes possible responses to detect threats.

Keywords: internet, secure software, threats, cryptography process

Procedia PDF Downloads 310
5552 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks

Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz

Abstract:

Small cell deployment in 5G networks is a promising technology to enhance capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn will result in an increase in signalling overhead. To guarantee service continuity, minimize unnecessary handovers, and reduce signalling overhead in heterogeneous networks, it is essential to properly model the handover decision problem. In this paper, we model the handover decision as a Multiple Attribute Decision Making (MADM) problem, specifically using the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and propose a hybrid TOPSIS method to control the handover in heterogeneous networks. The proposed method adopts a hybrid weighting, which is a combination of entropy and standard deviation weighting. A hybrid weighting control parameter is introduced to balance the impact of the standard deviation and entropy weighting on the network selection process and the overall performance; a sketch of this scheme is given below. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, compared to existing methods.
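A compact sketch of the hybrid-weighted TOPSIS ranking is given below; for brevity all attributes are treated as benefit criteria, and the small decision matrix is made-up illustrative data rather than measured network values.

```python
import numpy as np

def entropy_weights(X):
    P = X / X.sum(axis=0)
    P = np.where(P == 0, 1e-12, P)                        # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    d = 1.0 - e                                           # degree of divergence
    return d / d.sum()

def std_weights(X):
    s = X.std(axis=0)
    return s / s.sum()

def hybrid_topsis(X, alpha=0.5):
    """X: candidate networks x attributes; alpha balances the two weightings."""
    w = alpha * std_weights(X) + (1.0 - alpha) * entropy_weights(X)
    R = X / np.sqrt((X ** 2).sum(axis=0))                 # vector normalisation
    V = R * w                                             # weighted normalised matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)            # ideal and anti-ideal solutions
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                        # closeness: hand over to the largest

# Three candidate cells described by, e.g., signal quality, available bandwidth, dwell time.
scores = hybrid_topsis(np.array([[0.7, 20.0, 12.0],
                                 [0.9, 5.0, 30.0],
                                 [0.6, 15.0, 25.0]]))
```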

Keywords: handover, HetNets, interference, MADM, small cells, TOPSIS, weight

Procedia PDF Downloads 129
5551 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach to deal with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of the latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest; a generic formulation of this three-part structure is sketched below. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study on the Alzheimer's Disease Neuroimaging Initiative is presented.
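A generic formulation consistent with this three-part structure is sketched below; the notation is illustrative, not necessarily the authors' exact specification.

```latex
% (1) Dynamic factor analysis (measurement model) for the observed indicators y_{ij}(t):
y_{ij}(t) = \mu_j + \boldsymbol{\lambda}_j^{\top}\,\boldsymbol{\xi}_i(t) + \epsilon_{ij}(t)
% (2) Random-coefficient trajectory model for the latent variables:
\boldsymbol{\xi}_i(t) = \mathbf{b}_{0i} + \mathbf{b}_{1i}\, t + \mathbf{e}_i(t)
% (3) Proportional hazards model linking covariates and latent trajectories to the event risk:
h_i(t) = h_0(t)\,\exp\!\left\{\boldsymbol{\gamma}^{\top}\mathbf{x}_i + \boldsymbol{\beta}^{\top}\boldsymbol{\xi}_i(t)\right\}
```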

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 128
5550 Effect of Climate Variability on Honeybee's Production in Ondo State, Nigeria

Authors: Justin Orimisan Ijigbade

Abstract:

The study was conducted to assess the effect of climate variability on honeybee production in Ondo State, Nigeria. A multistage sampling technique was employed to collect data from 60 beekeepers across six Local Government Areas in Ondo State. The data collected were subjected to descriptive statistics and multiple regression analysis. The results showed that 93.33% of the respondents were male, with 80% above 40 years of age. The majority of the respondents (96.67%) had formal education and 90% produced honey for commercial purposes. The results revealed that 90% of the respondents admitted that low temperature resulting from long hours/periods of rainfall affected the foraging efficiency of the worker bees, 73.33% claimed that long periods of low humidity resulted in a low level of nectar flow, while 70% submitted that high temperature resulted in an improper composition of workers, drones and queen in the hive colony. The result of the multiple regression showed that beekeepers' experience, educational level, access to climate information, temperature and rainfall were the main factors affecting honeybee production in the study area. Therefore, beekeepers should be given more education on climate variability and its adaptive strategies towards ensuring better honeybee production in the study area.

Keywords: climate variability, honeybees production, humidity, rainfall and temperature

Procedia PDF Downloads 253
5549 A Framework for Designing Complex Product-Service Systems with a Multi-Domain Matrix

Authors: Yoonjung An, Yongtae Park

Abstract:

Offering a Product-Service System (PSS) is a well-accepted strategy that companies may adopt to provide a set of systemic solutions to customers. PSSs were initially provided in a simple form but now take diversified and complex forms involving multiple services, products and technologies. With the growing interest in the PSS, frameworks for PSS development have been introduced by many researchers. However, most of the existing frameworks fail to examine the various relations existing in a complex PSS. Since designing a complex PSS involves full integration of multiple products and services, it is essential to identify not only product-service relations but also product-product/service-service relations. It is equally important to specify how they are related for a better understanding of the system. Moreover, as customers tend to view their purchase from a more holistic perspective, a PSS should be developed based on the whole system's requirements, rather than focusing only on the product requirements or service requirements. Thus, we propose a framework to develop a complex PSS that is coordinated fully with the requirements of both worlds. Specifically, our approach adopts a multi-domain matrix (MDM). An MDM identifies not only inter-domain relations but also intra-domain relations, so it helps to design a PSS that includes highly desired and closely related core functions/features. Also, the various dependency types and rating schemes proposed in our approach help the integration process.

Keywords: inter-domain relations, intra-domain relations, multi-domain matrix, product-service system design

Procedia PDF Downloads 629
5548 Heroic Villains: An Exploration of the Use of Narrative Plotlines and Emerging Identities within Recovery Stories of Former Substance Abusers

Authors: Tria Moore, Aimee Walker-Clarke

Abstract:

The purpose of the study was to develop a deeper understanding of how self-identity is negotiated and reconstructed by people in recovery from substance abuse. The approach draws on the notion that self-identity is constructed through stories. Specifically, dominant narratives of substance abuse involve the 'addict identity', in which the meaning of being an addict is constructed through social interaction and informed by broader social meanings of substance misuse, which are considered deviant. The addict is typically understood as out of control, weak and feckless. Users may unconsciously embody this addict identity, which makes recovery less likely. Typical approaches to treatment employ the notion that recovery is much more likely when users change the way they think and feel about themselves by assembling a new identity. Recovery, therefore, involves a reconstruction of the self in a new light, which may mean rejecting a part of the self (the addict identity). One limitation is that previous research on this topic has been quantitative, which, while useful, tells us little about how this process is best managed. Should one, for example, reject the past addict identity completely and move on to the new identity, or is it more effective to accept the past identity and use it in the formation of the new non-user identity? The purpose of this research, then, is to explore how addicts in recovery have managed the transition between their past and current selves and whether this may inform therapeutic practice. Using a narrative approach, data were analyzed from five in-depth interviews with former addicts who had been abstinent for at least a year and who were in some form of volunteering role at substance treatment services in the UK. Although participants identified with a previous 'addict identity' and made efforts to disassociate themselves from this, they also recognized that acceptance was an important part of reconstructing their new identity. The participants' narratives used familiar plot lines to structure their stories, in which they positioned themselves as the heroes of their own stories, rather than as victims of circumstance. Instead of rejecting their former addict identity, which would mean rejecting a part of the self, participants used their experience in a reconstructive and restorative way. The findings suggest that encouraging people to tell their story and accept their addict identity are important factors in successful recovery.

Keywords: addiction, identity, narrative, recovery, substance abuse

Procedia PDF Downloads 291
5547 Automatic Identification and Monitoring of Wildlife via Computer Vision and IoT

Authors: Bilal Arshad, Johan Barthelemy, Elliott Pilton, Pascal Perez

Abstract:

Getting reliable, informative, and up-to-date information about the location, mobility, and behavioural patterns of animals will enhance our ability to research and preserve biodiversity. The fusion of infra-red sensors and camera traps offers an inexpensive way to collect wildlife data in the form of images. However, extracting useful data from these images, such as the identification and counting of animals, remains a manual, time-consuming, and costly process. In this paper, we demonstrate that such information can be automatically retrieved by using state-of-the-art deep learning methods. Another major challenge that ecologists are facing is the recounting of a single animal multiple times due to that animal reappearing in other images taken by the same or other camera traps. Nonetheless, such information can be extremely useful for tracking wildlife and understanding its behaviour. To tackle the multiple count problem, we have designed a meshed network of camera traps, so they can share the captured images along with timestamps, cumulative counts, and dimensions of the animal; a simplified sketch of this de-duplication step is given below. The proposed method leverages edge computing to support real-time tracking and monitoring of wildlife. This method has been validated in the field and can be easily extended to other applications focusing on wildlife monitoring and management, where the traditional way of monitoring is expensive and time-consuming.
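A simplified de-duplication step over detections pooled from a group of neighbouring traps might look like the sketch below; the record fields and the 10-minute suppression window are assumptions for illustration, not the authors' exact protocol.

```python
from datetime import timedelta

def deduplicated_counts(detections, window=timedelta(minutes=10)):
    """detections: list of dicts with 'species' and 'timestamp' keys, pooled from
    neighbouring camera traps via the meshed network. Detections of the same
    species that fall within the window are counted as one sighting."""
    counts, last_seen = {}, {}
    for det in sorted(detections, key=lambda d: d["timestamp"]):
        species = det["species"]
        prev = last_seen.get(species)
        if prev is None or det["timestamp"] - prev > window:
            counts[species] = counts.get(species, 0) + 1    # genuinely new sighting
        last_seen[species] = det["timestamp"]               # extend the suppression window
    return counts
```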

Keywords: computer vision, ecology, internet of things, invasive species management, wildlife management

Procedia PDF Downloads 123
5546 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding schemes (e.g. Fisher Vector, Vector of Locally Aggregated Descriptors) with low-level image features (e.g. SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNN) have richer information but lack geometric invariance. For scene classification, there are scattered objects with different sizes, categories, layouts, numbers and so on. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationship. In this paper, we propose a method to take advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained on the ImageNet and Places datasets separately are used as the pre-trained models to extract deep convolutional features at multiple scales. This produces dense local activations. By analyzing the performance of different CNNs at multiple scales, it is found that each CNN works better in different scale ranges. A scale-wise CNN adaptation is reasonable since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are then merged into a single vector by using a post-processing method called scale-wise normalization; a sketch of this step is given below. The essence of the Fisher Vector lies in the accumulation of the first and second order differences. Hence, the scale-wise normalization followed by average pooling balances the influence of each scale, since a different amount of features is extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is followed by a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories. Experimental results show that the scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets can boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
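A sketch of the scale-wise normalization and pooling step is given below, assuming the per-scale Fisher Vectors (all of equal length) have already been computed from the convolutional activations; the power/L2 normalisation choice is a common convention and an assumption here.

```python
import numpy as np

def scale_wise_pool(fisher_vectors_per_scale):
    """fisher_vectors_per_scale: list of equal-length 1-D arrays, one per scale."""
    normalised = []
    for fv in fisher_vectors_per_scale:
        fv = np.sign(fv) * np.sqrt(np.abs(fv))        # power (signed square-root) normalisation
        fv = fv / (np.linalg.norm(fv) + 1e-12)        # L2 normalisation per scale
        normalised.append(fv)
    pooled = np.mean(normalised, axis=0)              # average pooling across scales
    return pooled / (np.linalg.norm(pooled) + 1e-12)  # final L2 normalisation
```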

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 315
5545 Targeting Calcium Dysregulation for Treatment of Dementia in Alzheimer's Disease

Authors: Huafeng Wei

Abstract:

Alzheimer's Disease (AD) is the number one cause of dementia internationally, without effective treatments. Increasing evidence suggests that disruption of intracellular calcium homeostasis, primarily pathological elevation of cytosolic and mitochondrial calcium but reduction of endoplasmic reticulum (ER) calcium concentrations, plays critical upstream roles in multiple pathologies and the associated neurodegeneration, impaired neurogenesis, synapse and cognitive dysfunction in various AD preclinical studies. The last Food and Drug Administration (FDA)-approved drug for AD dementia treatment, memantine, exerts its therapeutic effects by ameliorating N-methyl-D-aspartate (NMDA) glutamate receptor overactivation and the subsequent calcium dysregulation. More research work is needed to develop other drugs targeting calcium dysregulation at multiple pharmacological sites of action for future effective AD dementia treatment. In particular, calcium channel blockers used for the treatment of hypertension and dantrolene used for the treatment of muscle spasm and malignant hyperthermia can be repurposed for this purpose. In our own research work, intranasal administration of dantrolene significantly increased its brain concentrations and durations, rendering it a more effective therapeutic drug with fewer side effects for chronic AD dementia treatment. This review summarizes the progress of various studies repurposing drugs targeting calcium dysregulation for future effective AD dementia treatment as potentially disease-modifying drugs.

Keywords: alzheimer, calcium, cognitive dysfunction, dementia, neurodegeneration, neurogenesis

Procedia PDF Downloads 168
5544 Deep Reinforcement Learning Approach for Optimal Control of Industrial Smart Grids

Authors: Niklas Panten, Eberhard Abele

Abstract:

This paper presents a novel approach for real-time and near-optimal control of industrial smart grids by deep reinforcement learning (DRL). To achieve highly energy-efficient factory systems, the energetic linkage of machines, technical building equipment and the building itself is desirable. However, the increased complexity of the interacting sub-systems, multiple time-variant target values and stochastic influences from the production environment, weather and energy markets make it difficult to efficiently control the energy production, storage and consumption in hybrid industrial smart grids. The studied deep reinforcement learning approach allows the solution space to be explored for suitable control policies which minimize a cost function. The deep neural network of the DRL agent is based on a multilayer perceptron (MLP), Long Short-Term Memory (LSTM) and convolutional layers. The agent is trained within multiple Modelica-based factory simulation environments by the Advantage Actor-Critic algorithm (A2C); a minimal sketch of the A2C update is given below. The DRL controller is evaluated by means of the simulation and then compared to a conventional, rule-based approach. Finally, the results indicate that the DRL approach is able to improve the control performance and significantly reduce the energy and operating costs of industrial smart grids.
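A minimal sketch of an actor-critic network and the A2C loss is shown below; the layer sizes, the plain MLP+LSTM body and the single-rollout update are illustrative assumptions, not the authors' exact architecture or training setup.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.policy = nn.Linear(hidden, n_actions)    # actor head (action logits)
        self.value = nn.Linear(hidden, 1)             # critic head (state value)

    def forward(self, obs_seq, state=None):
        x = self.body(obs_seq)                        # (batch, time, hidden)
        x, state = self.lstm(x, state)
        return self.policy(x), self.value(x), state

def a2c_loss(log_probs, values, rewards, gamma=0.99, value_coef=0.5):
    """log_probs, values, rewards: 1-D tensors collected over one rollout.
    Here the reward would be the negative energy/operating cost at each step."""
    returns, running = [], 0.0
    for r in reversed(rewards.tolist()):              # discounted returns, computed backwards
        running = r + gamma * running
        returns.append(running)
    returns = torch.tensor(list(reversed(returns)))
    advantages = returns - values.detach()
    actor_loss = -(log_probs * advantages).mean()
    critic_loss = (returns - values).pow(2).mean()
    return actor_loss + value_coef * critic_loss
```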

Keywords: industrial smart grids, energy efficiency, deep reinforcement learning, optimal control

Procedia PDF Downloads 179
5543 Resistance Spot Welding of Boron Steel 22MnB5 with Complex Welding Programs

Authors: Szymon Kowieski, Zygmunt Mikno

Abstract:

The study involved the optimization of process parameters during resistance spot welding of Al-coated martensitic boron steel 22MnB5, applied in hot stamping, performed using a programme with a multiple current impulse mode and a programme with a variable pressure force. The aim of this research work was to determine the possibilities of increasing welded joint strength and of expanding the welding lobe. The process parameters were adjusted on the basis of a welding process simulation and confronted with experimental data. 22MnB5 steel is known for its tendency to reach high hardness values in weld nuggets, often leading to interfacial failures (observed in the study-related tests). In addition, during resistance spot welding, many production-related factors can affect process stability, e.g. welding lobe narrowing, and lead to a deterioration of quality. Resistance spot welding performed using the above-named welding programme featuring three levels of force made it possible to achieve an 82% extension of the welding lobe. Joints made using the multiple current impulse programme, where the total welding time was below 1.4 s, revealed a change in the peeling failure mode (to full plug) and an increase in weld tensile shear strength of 10%.

Keywords: 22MnB5, hot stamping, interfacial fracture, resistance spot welding, simulation, single lap joint, welding lobe

Procedia PDF Downloads 370
5542 Pinch Technology for Minimization of Water Consumption at a Refinery

Authors: W. Mughees, M. Alahmad

Abstract:

Water is the most significant entity that controls local and global development. For the Gulf region, especially Saudi Arabia, with its limited potable water resources, the fresh water problem is highly significant. In this research, the study involves the design and analysis of pinch-based water/wastewater networks. Multiple water/wastewater networks were developed using pinch analysis involving the direct recycle/material recycle method, and a property-integration technique was adopted to carry out the direct recycle method. A petroleum refinery was considered as the case study. In the direct recycle methodology, minimum water discharge and minimum fresh water resource targets were estimated, and a re-design (or retrofitting) of water allocation in the networks was undertaken. Chemical Oxygen Demand (COD) and hardness were taken as the pollutant properties. The research was based on single and double contaminant approaches; for COD and hardness, the amount of fresh water was reduced from 340.0 m3/h to 149.0 m3/h (43.8%) and 208.0 m3/h (61.18%), respectively, while with the double contaminant approach the reduction in fresh water demand was 132.0 m3/h (38.8%). The required analysis was also carried out using a mathematical programming technique; optimization software such as LINGO was used for these studies and verified the graphical method results in a valuable and accurate way (a simplified single-contaminant formulation of this kind is sketched below). Among the multiple water networks, one possible water allocation network was developed based on mass exchange.
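For illustration, a single-contaminant direct-recycle allocation can be written as a small linear program that targets the minimum fresh water flow; the source and sink data below are made-up numbers, not the refinery case-study values, and the formulation is a simplified stand-in for the pinch/LINGO models used in the study.

```python
import numpy as np
from scipy.optimize import linprog

sources = [(50.0, 100.0), (80.0, 300.0)]   # (available flow m3/h, COD ppm)
sinks = [(60.0, 50.0), (70.0, 200.0)]      # (required flow m3/h, max inlet COD ppm); fresh water at 0 ppm

nS, nK = len(sources), len(sinks)
n = nS * nK + nK                            # recycle flows x[i, j] plus fresh water f[j]
def xi(i, j): return i * nK + j
def fi(j): return nS * nK + j

c = np.zeros(n)
c[nS * nK:] = 1.0                           # objective: minimise total fresh water use

A_eq, b_eq, A_ub, b_ub = [], [], [], []
for j, (demand, cmax) in enumerate(sinks):
    balance = np.zeros(n)                   # flow balance for each sink
    load = np.zeros(n)                      # contaminant load limit for each sink
    for i, (_, conc) in enumerate(sources):
        balance[xi(i, j)] = 1.0
        load[xi(i, j)] = conc
    balance[fi(j)] = 1.0
    A_eq.append(balance); b_eq.append(demand)
    A_ub.append(load); b_ub.append(cmax * demand)
for i, (avail, _) in enumerate(sources):    # do not reuse more than each source provides
    row = np.zeros(n)
    for j in range(nK):
        row[xi(i, j)] = 1.0
    A_ub.append(row); b_ub.append(avail)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=np.array(A_eq), b_eq=b_eq)
print("Minimum fresh water target (m3/h):", round(res.fun, 2))
```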

Keywords: minimization, water pinch, water management, pollution prevention

Procedia PDF Downloads 460
5541 Balance Control Mechanisms in Individuals With Multiple Sclerosis in Virtual Reality Environment

Authors: Badriah Alayidi, Emad Alyahya

Abstract:

Background: Most people with Multiple Sclerosis (MS) report worsening balance as the condition progresses. Poor balance control is also well known to be a significant risk factor for both falling and fear of falling. The increased risk of falls with disease progression thus makes balance control an essential target of gait rehabilitation amongst people with MS. Intervention programs have developed various methods to improve balance control, and accumulating evidence suggests that exercise programs may help people with MS improve their balance. Among these methods, virtual reality (VR) is growing in popularity as a balance-training technique owing to its potential benefits, including better compliance and greater user happiness. However, it is not clear if a VR environment will induce different balance control mechanisms in MS as compared to healthy individuals or traditional environments. Therefore, this study aims to examine how individuals with MS control their balance in a VR setting. Methodology: The proposed study takes an empirical approach to estimate and determine the role of balance response in persons with MS using a VR environment. It will use primary data collected through patient observations, physiological and biomechanical evaluation of balance, and data analysis. Results: The preliminary systematic review and meta-analysis indicated that there was variability in terms of the outcome assessing balance response in people with MS. The preliminary results of these assessments have the potential to provide essential indicators of the progression of MS and contribute to the individualization of treatment and evaluation of the interventions’ effectiveness. The literature describes patients who have had the opportunity to experiment in VR settings and then used what they have learned in the real world, suggesting that this VR setting could be more appealing than conditional settings. The findings of the proposed study will be beneficial in estimating and determining the effect of VR on balance control in persons with MS. In previous studies, VR was shown to be an interesting approach to neurological rehabilitation, but more data are needed to support this approach in MS. Conclusions: The proposed study enables an assessment of balance and evaluations of a variety of physiological implications related to neural activity as well as biomechanical implications related to movement analysis.

Keywords: multiple sclerosis, virtual reality, postural control, balance

Procedia PDF Downloads 58
5540 Encephalon-An Implementation of a Handwritten Mathematical Expression Solver

Authors: Shreeyam, Ranjan Kumar Sah, Shivangi

Abstract:

Recognizing and solving handwritten mathematical expressions can be a challenging task, particularly when characters must be segmented and classified. This project proposes a solution that uses a Convolutional Neural Network (CNN) and image processing techniques to accurately solve various types of equations, including arithmetic, quadratic, and trigonometric equations, as well as logical operations like AND, OR, NOT, NAND, XOR, and NOR. The proposed solution also provides a graphical solution, allowing users to visualize equations and their solutions. In addition to equation solving, the platform, called CNNCalc, offers a comprehensive learning experience for students. It provides educational content, a quiz platform, and a coding platform for practicing programming skills in different languages like C, Python, and Java. This all-in-one solution makes the learning process engaging and enjoyable for students. The proposed methodology includes horizontal compact projection analysis for segmentation and binarization, as well as connected component analysis and integrated connected component analysis for character classification; a simplified projection-based segmentation step is sketched below. The compact projection algorithm compresses the horizontal projections to remove noise and obtain a clearer image, contributing to the accuracy of character segmentation. The custom-designed CNN, combined with these image processing techniques, accurately recognizes and classifies the symbols within handwritten equations. Experimental results demonstrate the accuracy and effectiveness of the proposed solution in solving a wide range of equations, including arithmetic, quadratic, trigonometric, and logical operations. CNNCalc features a user-friendly interface with a graphical representation of the equations being solved, making it an interactive and engaging learning experience for users. The platform also includes tutorials, testing capabilities, and programming features in languages such as C, Python, and Java, and users can track their progress and work towards improving their skills. With its comprehensive features and accurate results, CNNCalc is poised to revolutionize the way students learn and solve mathematical equations.
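A simplified projection-based character segmentation step is sketched below; the thresholds, the use of column sums and the gap-merging rule are illustrative assumptions rather than the exact CNNCalc pipeline.

```python
import numpy as np

def segment_characters(binary_img, min_gap=2, min_width=3):
    """binary_img: 2-D array with 1 = ink, 0 = background.
    Returns (start, end) column spans, one per segmented symbol."""
    projection = binary_img.sum(axis=0)            # amount of ink in each image column
    is_ink = projection > 0
    spans, start = [], None
    for col, ink in enumerate(is_ink):
        if ink and start is None:
            start = col                            # a symbol begins
        elif not ink and start is not None:
            if col - start >= min_width:           # discard specks narrower than min_width
                spans.append((start, col))
            start = None
    if start is not None:
        spans.append((start, len(is_ink)))
    merged = []                                    # merge spans split by tiny gaps (broken strokes)
    for s, e in spans:
        if merged and s - merged[-1][1] < min_gap:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return merged
```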

Keywords: AI, ML, handwritten equation solver, maths, computer, CNNCalc, convolutional neural networks

Procedia PDF Downloads 100
5539 Assessment of Pull Mechanism at Enhancing Maize Farmers’ Utilisation of Aflasafe Bio-Control Measures in Oyo State, Nigeria

Authors: Jonathan A. Akinwale, Ibukun J. Agotola

Abstract:

There is a need to rethink how technology is disseminated to end users in order to ensure wide adoption and utilisation. Aflasafe bio-control was developed to combat aflatoxin in maize and ensure food safety for end users. This study was designed to assess how the pull mechanism is enhancing the utilisation of this proven technology among maize farmers in Oyo State, Nigeria. The study determined farmers' awareness of Aflasafe, sources of purchase of Aflasafe, incentives towards the usage of Aflasafe, constraints to farmers' utilisation, and factors influencing farmers' utilisation of Aflasafe bio-control measures. Respondents were selected using a multi-stage sampling procedure. Data were collected from respondents through an interview schedule and analyzed using descriptive statistics (means, frequencies, and percentages) and inferential statistics (Pearson Product Moment Correlation and regression analysis). The result showed that 89% of the farmers indicated implementers as the outlet for the purchase of Aflasafe. Also, premium payment and provision of technical assistance were the most highly ranked incentives for the utilisation of Aflasafe among the farmers. The study also revealed that the major constraints faced by respondents were low access to credit facilities, inadequate sources of purchase, and lack of storage facilities. A little above half (54%) of the farmers were found to have fully utilized Aflasafe in maize production. Pearson Product Moment Correlation (PPMC) analysis revealed a significant correlation between incentives and utilisation of Aflasafe (r-value=0.274; p ≤ 0.01). The result of the regression analysis indicated maize production experience (β=0.572), output (β=0.531), years of formal education (β=0.404) and household size (β=0.391) as the leading factors influencing farmers' utilisation of Aflasafe bio-control in maize production. The study, therefore, recommends that governments and non-governmental organisations take an interest in making Aflasafe available to maize farmers, either through loan provision or a price subsidy.

Keywords: Aflasafe bio-control, maize production, production incentives, pull mechanism, utilisation

Procedia PDF Downloads 113
5538 Optimizing Usability Testing with Collaborative Method in an E-Commerce Ecosystem

Authors: Markandeya Kunchi

Abstract:

Usability testing (UT) is one of the vital steps in the User-Centred Design (UCD) process when designing a product. In an e-commerce ecosystem, UT becomes primary, as new products, features, and services are launched very frequently, and the company incurs losses if an unusable and inefficient product is put on the market and rejected by customers. This paper tries to answer why UT is important in the product life-cycle of an e-commerce ecosystem. Secondary user research was conducted to find out the work patterns, development methods, types of stakeholders, technology constraints, etc. of a typical e-commerce company. Qualitative user interviews were conducted with product managers and designers to find out the structure, project planning, product management method and role of the design team in a mid-level company. The paper addresses the usual apprehensions of a company about inculcating UT within the team, and identifies factors like monetary resources, lack of a usability expert, narrow timelines, and lack of understanding from higher management as some primary reasons. Outsourcing UT to vendors is also very prevalent among mid-level e-commerce companies, but it has its own severe repercussions, such as very little team involvement, huge cost, misinterpretation of the findings, elongated timelines, and lack of empathy towards the customer. The shortfalls of not having a UT process in place within the team, and of conducting UT through vendors, are bad user experiences for customers while interacting with the product and badly designed products which are neither useful nor usable. As a result, companies see dipping conversion rates in apps and websites, huge bounce rates and increased uninstall rates. Thus, there was a need for a leaner UT system which could solve all these issues for the company. This paper highlights optimizing the UT process with a collaborative method; the degree of optimization and the structure of the collaborative method are the highlights of this paper. The collaborative method of UT is one in which the centralised design team of the company takes charge of conducting and analysing the UT. The UT is usually of a formative kind, where designers take the findings into account and use them in the ideation process. The success of the collaborative method of UT is due to its ability to sync with the product management method employed by the company or team. The collaborative method focuses on engaging various teams (design, marketing, product, administration, IT, etc.), each with its own defined roles and responsibilities, in conducting a smooth UT with users in-house. The paper finally highlights the positive results of the collaborative UT method after conducting more than 100 in-lab interviews with users across the different lines of business. Some of these are improved interaction between stakeholders and the design team, empathy towards users, improved design iteration, better sanity checks of design solutions, optimization of time and money, and effective and efficient design solutions. The future scope of collaborative UT is to make this method leaner by reducing the number of days to complete the entire project, starting from planning between teams to publishing the UT report.

Keywords: collaborative method, e-commerce, product management method, usability testing

Procedia PDF Downloads 104
5537 Tourist Behavior Towards Blockchain-Based Payments

Authors: A. Šapkauskienė, A. Mačerinskienė, R. Andrulienė, R. Bruzgė, S. Masteika, K. Driaunys

Abstract:

The COVID-19 pandemic has affected not only world markets and economies but also the daily lives of customers and their payment habits. The pandemic has accelerated the digital transformation, so the role of technology will become even more important post-COVID. Although the popularity of cryptocurrencies has reached unprecedented heights, there are still obstacles, such as a lack of consumer experience and distrust of these technologies, so exploring the role of cryptocurrency and blockchain in the context of international travel becomes extremely important. Research on tourists' intentions to use cryptocurrencies for payment purposes is limited due to the small number of research studies. To fill this research gap, an exploratory study based on the analysis of survey data was conducted. The purpose of the research is to explore how the behavior of tourists has changed when making financial transactions to pay for tourism services, in order to determine their intention to pay in cryptocurrencies. Behavioral intention can be examined as a dependent variable that is useful for studying the acceptance of blockchain as a cutting-edge technology. Therefore, this study examines the intention of travelers to use cryptocurrencies in electronic payments for tourism services. Several studies have shown that the intention to accept payments in a cryptocurrency is affected by the perceived usefulness of these payments and the perceived ease of use. The findings deepen our understanding of the readiness of service users to adopt blockchain-based payment in the tourism sector. The tourism industry has to focus not only on the technology but also on consumers who can use cryptocurrencies, creating new possibilities and increasing business competitiveness. Based on the research results, suggestions are made to guide future research on the use of cryptocurrencies by tourists in the tourism industry. Therefore, in line with the rapid expansion of virtual currency users, market capitalization, and payment in cryptographic currencies, it is necessary to explore the possibilities of implementing a blockchain-based system aiming to promote the use of services in the tourism sector, as the sector most affected by the pandemic.

Keywords: behavioral intention, blockchain-based payment, cryptocurrency, tourism

Procedia PDF Downloads 96
5536 Artificial Intelligence in Management Simulators

Authors: Nuno Biga

Abstract:

Artificial Intelligence (AI) allows machines to interpret information and learn from context analysis, giving them the ability to make predictions adjusted to each specific situation. In addition to learning by performing deterministic and probabilistic calculations, the 'artificial brain' also learns through information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a "Virtual Assistant" (VA) that provides users with useful suggestions, namely to pursue the following operations: a) relocate workstations in order to shorten travelled distances and minimize the stress of those involved; b) identify in real time the bottleneck(s) in the operations system so that it is possible to act upon them quickly; c) identify resources that should be polyvalent so that the system can be more efficient; d) identify in which specific processes it may be advantageous to establish partnerships with other teams; and e) assess possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed in a pilot project. Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of different alternative strategic management actions. The pilot project incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on the compilation of data that allows causal relationships to be established between decisions taken and results obtained. The systemic analysis and interpretation of this information is materialised in the Assisted-BIGAMES through a computer application called "BIGAMES Virtual Assistant" that players can use during the game. Each participant in the Virtual Assisted-BIGAMES continually asks themselves about the decisions they should make during the game in order to win the competition. To this end, the role of each team's VA consists of guiding the players to be more effective in their decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, and as the participants gain a better understanding of the game, they will more easily dispense with the VA's recommendations and rely more on their own experience, capability, and knowledge to support their own decisions. Preliminary results show that the introduction of the VA leads to faster learning of the decision-making process. The facilitator (Serious Game Controller) is responsible for supporting the players with further analysis, and the recommended action may or may not be aligned with the previous recommendations of the VA. All the information should be jointly analysed and assessed by each player, who is expected to add "Emotional Intelligence", a component absent from the machine learning process.

Keywords: artificial intelligence (AI), gamification, key performance indicators (KPI), machine learning, management simulators, serious games, virtual assistant

Procedia PDF Downloads 87
5535 The Actoprotective Efficiency of Pyrimidine Derivatives

Authors: Nail Nazarov, Vladimir Zobov, Alexandra Vyshtakalyuk, Vyacheslav Semenov, Irina Galyametdinova, Vladimir Reznik

Abstract:

The effects of xymedon and six new pyrimidine derivatives, close and distant analogs of xymedon, on rats' working capacity were studied in the 'swimming to failure' test. It was shown that a single administration of the studied compounds did not have a statistically significant effect in the test. Under multiple intraperitoneal administration of the studied pyrimidine derivatives, the compound L-ascorbate of 1-(2-hydroxyethyl)-4,6-dimethyl-1,2-dihydropyrimidin-2-one had the lowest toxicity and the most pronounced actoprotective effect. Administration at a dose of 20 mg/kg caused a statistically significant 440% increase in the duration of swimming of rats on the 14th day of the experiment compared with the control group. Multiple administration of the compound under physical load did not affect leucopoiesis but stimulated erythropoiesis, resulting in an increase in the number of erythrocytes and in the hemoglobin level. Administration of the substance under mixed exhausting loads prevented such changes of blood biochemical parameters as a reduction of glucose and increases of urea and lactic acid levels, which indicates an improvement in the animals' tolerance of loads and an anti-catabolic effect of the compound. The absence of hepato- and cardiotoxic effects of the substance was shown. This work was performed with the financial support of the Russian Science Foundation (grant № 14-50-00014).

Keywords: actoprotectors, physical working capacity, pyrimidine derivatives, xymedon

Procedia PDF Downloads 279
5534 Lennox-Gastaut Syndrome Associated with Dysgenesis of Corpus Callosum

Authors: A. Bruce Janati, Muhammad Umair Khan, Naif Alghassab, Ibrahim Alzeir, Assem Mahmoud, M. Sammour

Abstract:

Rationale: Lennox-Gastaut syndrome (LGS) is an electro-clinical syndrome composed of the triad of mental retardation, multiple seizure types, and the characteristic generalized slow spike-wave complexes in the EEG. In this article, we report on two patients with LGS whose brain MRI showed dysgenesis of the corpus callosum (CC). We review the literature and stress the role of the CC in the genesis of secondary bilateral synchrony (SBS). Method: This was a clinical study conducted at King Khalid Hospital. Results: The EEG was consistent with LGS in patient 1 and showed unilateral slow spike-wave complexes in patient 2. The MRI showed hypoplasia of the splenium of the CC in patient 1, and global hypoplasia of the CC combined with Joubert syndrome in patient 2. Conclusion: Based on the data, we proffer the following hypotheses: 1) hypoplasia of the CC interferes with the functional integrity of this structure; 2) the genu of the CC plays a pivotal role in the genesis of secondary bilateral synchrony; 3) electrodecremental seizures in LGS emanate from pacemakers generated in the brain stem, in particular the mesencephalon, projecting abnormal signals to the cortex via thalamic nuclei; 4) unilateral slow spike-wave complexes in the context of mental retardation and multiple seizure types may represent a variant of LGS, justifying neuroimaging studies.

Keywords: EEG, Lennox-Gastaut syndrome, corpus callosum, MRI

Procedia PDF Downloads 431
5533 A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images

Authors: Qiang Wang, Hongyang Yu

Abstract:

Multi-human 3D pose estimation is a challenging task in computer vision, which aims to recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically use only color (RGB) images as input, our approach utilizes both the color and depth (D) information contained in RGB-D images. We also employ a transformer-based model as the backbone of our approach, which is able to capture long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model on the standard 3D pose estimation metric of mean per-joint position error (MPJPE); a minimal sketch of this metric is given below. Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on the overall performance of the model. In summary, our work demonstrates the effectiveness of using a transformer-based approach with RGB-D images for multi-human 3D pose estimation and has potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality.
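A minimal sketch of the MPJPE metric, assuming predicted and ground-truth joints are already matched and expressed in the same coordinate frame:

```python
import numpy as np

def mpjpe(pred, gt):
    """pred, gt: arrays of shape (num_people, num_joints, 3), e.g. in millimetres.
    Returns the mean Euclidean distance between predicted and ground-truth joints."""
    return np.linalg.norm(pred - gt, axis=-1).mean()
```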

Keywords: multi-human 3D pose estimation, RGB-D images, transformer, 3D joint locations

Procedia PDF Downloads 63
5532 Indoor Air Pollution and Reduced Lung Function in Biomass Exposed Women: A Cross Sectional Study in Pune District, India

Authors: Rasmila Kawan, Sanjay Juvekar, Sandeep Salvi, Gufran Beig, Rainer Sauerborn

Abstract:

Background: Indoor air pollution, especially from the use of biomass fuels, remains a potentially large global health threat. The inefficient use of such fuels in poorly ventilated conditions results in high levels of indoor air pollution, most seriously affecting women and young children. Objectives: The main aim of this study was to measure and compare the lung function of women exposed to biomass fuels and to LPG fuels and to relate it to the indoor emissions, measured using a structured questionnaire, a spirometer and filter-based low volume samplers, respectively. Methodology: This cross-sectional comparative study was conducted among women (aged > 18 years) living in rural villages of Pune district who had not been diagnosed with chronic pulmonary disease or any other respiratory disease and who had been using biomass fuels or LPG for cooking for a minimum period of 5 years. Data collection was done from April to June 2017, in the dry season. Spirometry was performed using the portable, battery-operated ultrasound Easy One spirometer (Spiro bank II, NDD Medical Technologies, Zurich, Switzerland) to determine lung function in terms of forced expiratory volume. The primary outcome variable was forced expiratory volume in 1 second (FEV1). The secondary outcome was chronic obstructive pulmonary disease (post-bronchodilator FEV1/Forced Vital Capacity (FVC) < 70%) as defined by the Global Initiative for Chronic Obstructive Lung Disease. Potential confounders such as age, height, weight, smoking history, occupation and educational status were considered. Results: Preliminary results showed that women using biomass fuels (FEV1/FVC = 85% ± 5.13) had comparatively reduced lung function compared to the LPG users (FEV1/FVC = 86.40% ± 5.32). The mean PM 2.5 mass concentration was 274.34 ± 314.90 in the biomass users' kitchens and 85.04 ± 97.82 in the LPG users' kitchens. The black carbon amount was higher for the biomass users (46.71 ± 46.59 µg/m³) than for the LPG users (11.08 ± 22.97 µg/m³). Most of the houses used a separate kitchen. Almost all the houses that used a clean fuel like LPG still had a minimal amount of particulate matter 2.5, which might be due to background pollution and cross ventilation from houses using biomass fuels. Conclusions: There is therefore an urgent need to adopt various strategies to improve indoor air quality. There is a lack of data on the current state of climate-active pollutant emissions from different stove designs, and the major deficiencies that need to be tackled should be identified. Moreover, the advancement of research tools, measuring techniques in particular, is critical for researchers in developing countries to improve their capability to study these emissions and address the growing climate change and public health concerns.

Keywords: black carbon, biomass fuels, indoor air pollution, lung function, particulate matter

Procedia PDF Downloads 162
5531 Synthesis and Characterization of New Thermotropic Monomers – Containing Phosphorus

Authors: Diana Serbezeanu, Ionela-Daniela Carja, Tachita Vlad-Bubulac, Sergiu Sova

Abstract:

New phosphorus-containing monomers having methoxy end functional groups were prepared from methyl 4-hydroxybenzoate and two different phosphorus dichlorides, namely phenylphosphonic dichloride and phenyl dichlorophosphate. The structures of the monomers were confirmed by FTIR and NMR spectroscopy. The assignments of the 1H, 13C and 31P chemical shifts are based on 1D and 2D NMR homo- and heteronuclear correlations (H,H-COSY (Correlation Spectroscopy), H,C-HMQC (Heteronuclear Multiple Quantum Correlation) and H,C-HMBC (Heteronuclear Multiple Bond Correlation)) and 31P-13C couplings. The monomers exhibited good solubility in common organic solvents. Dimethyl sulfoxide was found to be a good solvent for growing crystals of considerable size, which were investigated by X-ray analysis. One of these two new monomers presented thermotropic liquid crystalline behaviour, as revealed by differential scanning calorimetry (DSC), polarized light microscopy (PLM) and X-ray diffraction (XRD). The transition temperature from the crystal to the liquid crystalline state (K→LC) was 143°C and from the LC to the isotropic state (LC→I) was 167°C. Upon heating, bis(4-(methoxycarbonyl)phenyl formed fine textures, difficult to ascribe to smectic or nematic phases. Upon cooling from the isotropic state, bis(4-(methoxycarbonyl)phenyl exhibited a mosaic-type texture. Small-angle X-ray diffraction (SAXS) measurements of bis(4-(methoxycarbonyl)phenyl showed two peaks at 1.8 Å and 3.5 Å, respectively, suggesting organization at the supramolecular level.

Keywords: phosphorus-containing monomers, polarized light microscopy, structure investigation, thermotropic liquid crystalline properties

Procedia PDF Downloads 288
5530 The Relationship between First-Day Body Temperature and Mortality in Traumatic Patients

Authors: Neda Valizadeh, Mani Mofidi, Sama Haghighi, Ali Hashemaghaee, Soudabeh Shafiee Ardestani

Abstract:

Background: There are many systems and parameters for evaluating trauma patients in the emergency department. Most of these evaluations aim to distinguish patients in worse condition so that the care systems have a better prediction of the condition and can provide better care. The purpose of this study was to determine the relationship between axillary body temperature and mortality in patients hospitalized in the intensive care unit (ICU) with multiple traumas, and its relationship with other clinical and para-clinical factors. Methods: All patients between 16 and 75 years old with multiple traumas who were admitted to the Emergency Department and then hospitalized in the ICU were included in our study. The axillary temperature on the first and second days of admission, the Glasgow Coma Scale (GCS), systolic blood pressure, serum glucose level, and white blood cell count of all patients on the admission day were recorded, and their relationship with mortality was analyzed with SPSS software using suitable statistical tests. Results: Axillary body temperatures on the first and second days were statistically lower in expired traumatic patients (p=0.001 and p<0.001, respectively). Patients with a lower GCS had a significantly lower first-day temperature and significantly higher mortality (p=0.006 and p=0.006, respectively). Furthermore, the first-day axillary temperature was significantly lower in patients with a lower first-day systolic blood pressure (p=0.014). Conclusion: Our results showed that a lower axillary body temperature on the first day is associated with higher mortality, lower GCS, and lower systolic blood pressure. Thus, it could be used as a predictor of mortality in the evaluation of traumatic patients in emergency settings.

Keywords: fever, trauma, mortality, emergency

Procedia PDF Downloads 359
5529 Neurocognitive and Executive Function in Cocaine Addicted Females

Authors: Gwendolyn Royal-Smith

Abstract:

Cocaine ranks as one of the world's most addictive and commonly abused stimulant drugs. Recent evidence indicates that the abuse of cocaine has risen so quickly among females that this group now accounts for about 40 percent of all users in the United States. Neuropsychological studies have demonstrated that specific neural activation patterns carry higher risks for neurocognitive and executive function deficits in cocaine-addicted females, thereby increasing their vulnerability to poorer treatment outcomes and more frequent post-treatment relapse when compared to males. This study examined secondary data from a convenience sample of 164 cocaine-addicted males and females to assess neurocognitive and executive function. The principal objective of this study was to assess whether individual performance on the Stroop Color-Word Task is predictive of treatment success by gender. A second objective evaluated whether individual performance on neurocognitive measures, including the Stroop Color-Word Task, the Rey Auditory Verbal Learning Test (RAVLT), the Iowa Gambling Task, the Wisconsin Card Sorting Task (WCST), the total score from the Barratt Impulsiveness Scale (Version 11) (BIS-11) and the total score from the Frontal Systems Behavior Scale (FrSBe), demonstrated differences in neurocognitive and executive function performance by gender. Logistic regression models with covariate adjustment were employed. Initial analyses of the Stroop Color-Word Task indicated significant differences in the performance of males and females, with females experiencing more challenges in derived interference reaction time and associative recall ability. In early testing, including the RAVLT, the number of advantageous vs disadvantageous cards from the Iowa Gambling Task, the number of perseverative errors from the WCST, the BIS-11 total score and the FrSBe total score, results were mixed, with women scoring lower on multiple indicators of both neurocognitive and executive function.

Keywords: cocaine addiction, gender, neuropsychology, neurocognitive, executive function

Procedia PDF Downloads 385
5528 Using Multiple Strategies to Improve the Nursing Staff Edwards Lifesciences Hemodynamic Monitoring Correctness of Operation

Authors: Hsin-Yi Lo, Huang-Ju Jiun, Yu-Chiao Chu

Abstract:

Hemodynamic monitoring is important in the intensive care unit. With advances in medical technology in recent years and the growing diversification of intensive care equipment, many kinds of instruments are available for monitoring hemodynamics; Edwards Lifesciences Hemodynamic Monitoring (FloTrac) is one of them. In recent medical safety incidents, monitored parameters changed but nurses did not notify the doctor in time; therefore, it is hoped to analyze the current problems and find effective improvement strategies. In August 2021, a survey found that the correctness of FloTrac operation was only 74.0%. Reasons included lack of education, an operation manual that was difficult to read, lack of an audit mechanism, nurses not knowing which numerical changes required notifying the doctor, omissions due to heavy workload, unfamiliarity with the operation, and omissions caused by the large volume of nursing records. Improvement methods included planning professional nurse education, formulating the 'secret arts' of FloTrac, enacting an audit mechanism, establishing FloTrac action learning, making a 'follow the sun' care map, holding simulated training, and establishing automatic upload of monitoring data into nursing records. After improvement, the correctness of FloTrac operation increased to 98.8%. The results were good and have been implemented in the ICUs of the hospital.

Keywords: hemodynamic monitoring, edwards lifesciences hemodynamic monitoring, multiple strategies, intensive care

Procedia PDF Downloads 67
5527 Study on the Stability of Large Space Expandable Parabolic Cylindrical Antenna

Authors: Chuanzhi Chen, Wenjing Yu

Abstract:

A parabolic cylindrical deployable antenna has the characteristics of a wide swath width, strong directivity, high gain, and easy automatic beam scanning. However, due to its large size, high flexibility, and strong coupling, the deployment process of a parabolic cylindrical deployable antenna presents problems such as unsynchronized deployment speed, large local deformation, and discontinuous switching of the deployment state. Taking a large deployable parabolic cylindrical antenna as the research object, this paper studies the instability of the antenna's unfolding process, which is caused by multiple factors such as multiple closed loops, elastic deformation, motion friction, and gap collision. Firstly, the multi-flexible-body system dynamics model of the large parabolic cylindrical antenna is established to study the influence of friction and elastic deformation on the deployment stability of the large multi-closed-loop antenna. Secondly, a method for evaluating deployment stability is studied, and a quantitative index for the antenna configuration design is proposed to provide a theoretical basis for improving the overall performance of the antenna. Finally, through simulation analysis and experiment, the deployment dynamics and stability of large parabolic cylindrical antennas are verified by in-depth analysis, and principles for improving the stability of antenna deployment are summarized.

Keywords: multibody dynamics, expandable parabolic cylindrical antenna, stability, flexible deformation

Procedia PDF Downloads 133
5526 Investigation of Delivery of Triple Play Data in GE-PON Fiber to the Home Network

Authors: Ashima Anurag Sharma

Abstract:

Optical fiber based networks can deliver performance that supports the increasing demand for high speed connections. One of the new technologies that has emerged in recent years is the Passive Optical Network (PON). This research paper demonstrates the simultaneous delivery of triple play services (data, voice, and video) over a GE-PON fiber to the home network. A comparison between various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be supported decreases due to the increase in bit error rate; the standard Q-factor to BER relation is recalled below for context.
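For context, receiver performance in such links is commonly summarised by the Q factor, with the bit error rate obtained from the standard Gaussian-noise relation below; this is a textbook relation added for illustration, and the paper's particular simulation setup is not restated here.

```python
from math import sqrt
from scipy.special import erfc

def ber_from_q(q):
    """Bit error rate for a given receiver Q factor under the Gaussian-noise assumption."""
    return 0.5 * erfc(q / sqrt(2))
```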

Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT

Procedia PDF Downloads 512