Search results for: data bank
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24926

20576 A Novel Machine Learning Approach to Aid Agrammatism in Non-fluent Aphasia

Authors: Rohan Bhasin

Abstract:

Agrammatism in non-fluent aphasia cases can be defined as a language disorder wherein a patient can only use content words (nouns, verbs and adjectives) for communication; their speech is devoid of functional word types like conjunctions and articles, generating speech with extremely rudimentary grammar. Past approaches involve speech therapy of some order, with conversation analysis used to analyse pre-therapy speech patterns and qualitative changes in conversational behaviour after therapy. We describe a novel method to generate functional words (prepositions, articles) around content words using a combination of Natural Language Processing and Deep Learning algorithms. The approach the paper investigates is LSTMs or Seq2Seq: a sequence-to-sequence (seq2seq) or LSTM model takes in a sequence of inputs and outputs a sequence. This approach needs a significant amount of training data, with each training example containing pairs such as (content words, complete sentence). We generate such data by starting with complete sentences from a text source and removing functional words to keep just the content words. However, this approach requires a lot of training data to produce coherent output. The assumption of this approach is that the content words received in the inputs of both text models are preserved, i.e., they will not be altered after the functional grammar is slotted in. This is a potential limit in cases of severe agrammatism, where such ordering might not be inherently correct; the approach is therefore best suited to assisting communication in mild agrammatism in non-fluent aphasia cases. By generating these function words around the content words, we can provide meaningful sentence options to the patient for articulate conversations. Our project thus translates the task of generating sentences from content-specific words into an assistive technology for non-fluent aphasia patients.
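
As a rough sketch of the data-generation step described above, assuming a simplified stoplist and whitespace tokenizer rather than the authors' implementation (a real system would use POS tagging to identify function words):

```python
# Minimal sketch: build (content-words, sentence) training pairs by
# deleting function words from complete sentences. The stoplist below
# is an illustrative assumption, not the authors' word list.
FUNCTION_WORDS = {
    "a", "an", "the", "and", "or", "but", "of", "in", "on", "at",
    "to", "for", "with", "by", "from", "is", "are", "was", "were",
}

def make_training_pair(sentence: str) -> tuple[str, str]:
    tokens = sentence.lower().replace(".", "").split()
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return " ".join(content), sentence

corpus = [
    "The dog ran to the park.",
    "She is reading a book in the library.",
]
for content, full in (make_training_pair(s) for s in corpus):
    print(f"input:  {content}\ntarget: {full}\n")
```

The seq2seq model would then learn to map the stripped input back to the full sentence, which is exactly the completion task needed at inference time.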

Keywords: aphasia, expressive aphasia, assistive algorithms, neurology, machine learning, natural language processing, language disorder, behaviour disorder, sequence to sequence, LSTM

Procedia PDF Downloads 149
20575 Determining the Effectiveness of Dialectical Behavior Therapy in Reducing the Psychopathic Deviance of Criminals

Authors: Setareh Gerayeli

Abstract:

The present study tries to determine the effectiveness of dialectical behavior therapy in reducing the psychopathic deviance of employed criminals released from prison. The experimental method was used in this study, and the statistical population included employed criminals released from prison in Mashhad. Thirty offenders were selected randomly as the sample of the study. The MMPI-2 was used to collect data in the pre-test and post-test stages. The behavioral therapy was conducted on the experimental group during fourteen two-and-a-half-hour sessions, while the control group did not receive any intervention. Data analysis was conducted using analysis of covariance. The results showed a significant difference between the post-test mean scores of the two groups. The findings suggest that dialectical behavior therapy is effective in reducing psychopathic deviance.

Keywords: criminals, dialectical behavior therapy, psychopathic deviance, prison

Procedia PDF Downloads 219
20574 A False Introduction: Teaching in a Pandemic

Authors: Robert Michael, Kayla Tobin, William Foster, Rachel Fairchild

Abstract:

The COVID-19 pandemic has caused significant disruptions in education, particularly in the teaching of health and physical education (HPE). This study examined a cohort of teachers who experienced being preservice and first-year teachers during various stages of the pandemic. Qualitative data were collected through a series of structured interviews with six teachers from different schools in the Eastern U.S. Thematic analysis was employed to analyze the data. The pandemic significantly impacted the way HPE was taught as schools shifted to virtual and hybrid models. Findings revealed five major themes: (a) You want me to teach HOW?, (b) PE without equipment and six feet apart, (c) Behind the Scenes, (d) They’re back…I became a behavior management guru, and (e) The Pandemic Crater. Overall, this study highlights the significant challenges faced by preservice and first-year teachers in teaching physical education during the pandemic and underscores the need for ongoing support and resources to help them adapt and succeed in these challenging circumstances.

Keywords: teacher education, preservice teachers, first year teachers, health and physical education

Procedia PDF Downloads 160
20573 The Factors That Influence the Self-Sufficiency and the Self-Efficacy Levels among Oncology Patients

Authors: Esra Danaci, Tugba Kavalali Erdogan, Sevil Masat, Selin Keskin Kiziltepe, Tugba Cinarli, Zeliha Koc

Abstract:

This study was conducted in a descriptive and cross-sectional manner to determine the factors that influence the self-efficacy and self-sufficiency levels among oncology patients. The research was conducted between January 24, 2017 and September 24, 2017 in the oncology and hematology departments of a university hospital in Turkey with 179 voluntary inpatients. The data were collected through the Self-Sufficiency/Self-Efficacy Scale and a 29-question survey prepared to determine the sociodemographic and clinical properties of the patients. The Self-Sufficiency/Self-Efficacy Scale is a Likert-type scale with 23 items; scale scores range between 23 and 115, and a high final score indicates a good self-sufficiency/self-efficacy perception for the individual. The data were analyzed using percentage analysis, one-way ANOVA, the Mann-Whitney U-test, the Kruskal-Wallis test and the Tukey test. The demographic data of the subjects were as follows: 57.5% were male and 42.5% were female, 82.7% were married, 46.4% were primary school graduates, 36.3% were housewives, 19% were employed, 93.3% had social security, 52.5% had incomes matching their expenses, and 49.2% lived in the city center. The mean age was 57.1±14.6. It was determined that 22.3% of the patients had lung cancer, 19.6% had leukemia, and 43.6% had a good overall condition. The mean self-sufficiency/self-efficacy score was 83.00 (range: 41 to 115). It was determined that the patients' self-sufficiency/self-efficacy scores were influenced by some of their sociodemographic and clinical properties. This study found that the patients had high self-sufficiency/self-efficacy scores. It is recommended that nursing care plans be developed to improve patients' self-sufficiency/self-efficacy levels in the light of their sociodemographic and clinical properties.

Keywords: oncology, patient, self-efficacy, self-sufficiency

Procedia PDF Downloads 155
20572 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data

Authors: Georgiana Onicescu, Yuqian Shen

Abstract:

Due to the complex nature of geo-referenced data, multicollinearity among the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we propose a two-stage variable selection method that extends the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, to investigate the impact of risk factors on health outcomes. Specifically, in stage I, we perform variable selection using the Bayesian Lasso and several other variable selection approaches. Then, in stage II, we perform model selection with only the variables selected in stage I and again compare the methods. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered the cases where all candidate risk factors are independently normally distributed or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, the binary indicator and the combination of the binary indicator and the Lasso, were considered and compared as alternatives. The simulation results indicate that the proposed two-stage Bayesian Lasso variable selection method performs best in both the independent and dependent cases considered. When compared with the one-stage approach and the two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
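
To make the two-stage idea concrete, the sketch below uses a simplified, non-spatial, frequentist analogue in scikit-learn: stage I selects variables with a cross-validated Lasso, stage II refits an unpenalized model on the survivors. This is an illustration of the general scheme only; the paper's method is a Bayesian spatial Lasso, which this sketch does not reproduce.

```python
# Simplified two-stage selection: Lasso screening, then refit.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])  # sparse truth
y = X @ beta + rng.normal(size=n)

# Stage I: Lasso with cross-validated penalty selects the variables
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_ != 0)
print("selected variables:", selected)

# Stage II: refit an unpenalized model using only the selected variables
refit = LinearRegression().fit(X[:, selected], y)
print("refitted coefficients:", refit.coef_.round(2))
```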

Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection

Procedia PDF Downloads 123
20571 Applications of Greenhouse Data in Guatemala in the Analysis of Sustainability Indicators

Authors: Maria A. Castillo H., Andres R. Leandro, Jose F. Bienvenido B.

Abstract:

In 2015, Guatemala officially adopted the Sustainable Development Goals (SDG) according to the 2030 Agenda agreed by the United Nations. In 2016, these objectives and goals were reviewed, and the National Priorities were established within the K'atún 2032 National Development Plan. In 2019 and 2021, progress was evaluated with 120 defined indicators; a need to improve the quality and availability of the statistical data necessary for the analysis of sustainability indicators was detected, so the values to be reached in 2024 and 2032 were adjusted. The need for greater agricultural technology is one of the priorities established within SDG 2, "Zero Hunger". Within this area, protected agricultural production provides greater productivity throughout the year, reduces the use of chemical products to control pests and diseases, reduces the negative impact of climate and improves product quality. During the crisis caused by COVID-19, there was an increase in exports of fruits and vegetables produced in greenhouses in Guatemala; however, this information was not considered in the 2021 revision of the Plan. The objective of this study is to evaluate the information available on greenhouse agricultural production and its integration into the sustainability indicators for Guatemala. This study was carried out in four phases: 1. Analysis of the goals established for SDG 2 and the indicators included in the K'atún Plan. 2. Analysis of environmental, social and economic indicator models. 3. Definition of territorial levels at 2 geographic scales: departments and municipalities. 4. Diagnosis of the available data on technological agricultural production, with emphasis on greenhouses, at the 2 geographic scales. A summary of the results is presented for each phase, and finally some recommendations for future research are added. The main contribution of this work is to improve the available data that allow the incorporation of agricultural technology indicators into the established goals, to evaluate their impact on food security and nutrition, employment and investment, poverty, and the use of water and natural resources, and to provide a methodology applicable to other production models and other geographic areas.

Keywords: greenhouses, protected agriculture, sustainable indicators, Guatemala, sustainability, SDG

Procedia PDF Downloads 70
20570 Examining the Relationship between Family Functioning and Perceived Self-Efficacy

Authors: Fenni Sim

Abstract:

Objectives: The purpose of the study is to examine the relationship between family functioning and level of self-efficacy: how family functioning can potentially affect self-efficacy, which may eventually lead to better clinical outcomes. The hypothesis was ‘Patients on haemodialysis with perceived higher family functioning are more likely to have a higher perceived level of self-efficacy’. Methods: The study was conducted with a mixed methodology, combining quantitative and qualitative data collection through a survey and semi-structured interviews respectively. The General Self-Efficacy Scale and SCORE-15 were self-administered by participants, and the data were analysed with correlation analysis using Microsoft Excel. 79 patients were recruited for the study through random sampling. 6 participants whose results did not reflect the hypothesis were then recruited for the qualitative study, and interpretive phenomenological analysis was used to analyse the qualitative data. Findings: The hypothesis that higher family functioning leads to higher perceived self-efficacy was accepted. The correlation coefficient of -0.21 suggested a mild correlation between the two variables (on the SCORE-15, higher scores indicate poorer family functioning, so a negative coefficient is in the hypothesised direction). However, only 4.6% of the variation in perceived self-efficacy is accounted for by the variation in family functioning. The qualitative study extrapolated three themes that might explain the variations in the outliers: (1) level of physical functioning affects perceived self-efficacy, (2) instrumental support from family influences the perceived level of family functioning and self-efficacy, (3) acceptance of illness reflects a higher level of self-efficacy. Conclusion: While family functioning does have an impact on perceived self-efficacy, there are many intrapersonal and physical factors that may affect self-efficacy. The concepts of family functioning and self-efficacy are more appropriately seen as complementing each other to help a patient manage his illness. Healthcare social workers can look at how family functioning supports the individual needs of patients with different trajectories of ESRD and the support that can be provided to improve one’s self-efficacy.

Keywords: chronic kidney disease, coping with illness, family functioning, self-efficacy

Procedia PDF Downloads 158
20569 A Design of Elliptic Curve Cryptography Processor based on SM2 over GF(p)

Authors: Shiji Hu, Lei Li, Wanting Zhou, DaoHong Yang

Abstract:

Data encryption is the foundation of today’s communication. On this basis, how to improve the speed of data encryption and decryption is a problem that scholars continually work on. In this paper, we propose an elliptic curve cryptographic processor architecture based on the SM2 prime field. In terms of hardware implementation, we optimized the algorithms in different stages of the structure. For finite field modular operations, we propose an optimized improvement of the Karatsuba-Ofman multiplication algorithm and shorten the critical path through a pipeline structure in the algorithm implementation. Based on the SM2 recommended prime field, a fast modular reduction algorithm is used to reduce the 512-bit-wide data obtained from the multiplication unit. The radix-4 extended Euclidean algorithm is used to realize the conversion between the affine coordinate system and the Jacobi projective coordinate system. In the parallel scheduling of point operations on elliptic curves, we propose a three-level parallel structure for point addition and point doubling based on the Jacobian projective coordinate system. Combined with the scalar multiplication algorithm, we added mutual pre-operation to the point addition and point doubling operations to improve the efficiency of scalar point multiplication. The proposed ECC hardware architecture was verified and implemented on Xilinx Virtex-7 and ZYNQ-7 platforms; each 256-bit scalar multiplication operation took 0.275 ms. The performance for handling scalar multiplication is 32 times that of a CPU (dual-core ARM Cortex-A9).
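
For reference, the classical Karatsuba recursion that the hardware multiplier builds on can be sketched in a few lines; this illustrates the textbook algorithm only, not the paper's pipelined hardware variant.

```python
# Classical recursive Karatsuba multiplication on non-negative integers:
# one multiplication of the half-size cross terms replaces two.
def karatsuba(x: int, y: int) -> int:
    if x < 16 or y < 16:                      # small operands: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)       # split x = xh*2^m + xl
    yh, yl = y >> m, y & ((1 << m) - 1)       # split y = yh*2^m + yl
    a = karatsuba(xh, yh)                     # product of high parts
    b = karatsuba(xl, yl)                     # product of low parts
    c = karatsuba(xh + xl, yh + yl) - a - b   # cross terms with one multiply
    return (a << (2 * m)) + (c << m) + b

# Quick self-check against Python's built-in multiplication
import random
for _ in range(100):
    u, v = random.getrandbits(256), random.getrandbits(256)
    assert karatsuba(u, v) == u * v
```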

Keywords: elliptic curve cryptosystems, SM2, modular multiplication, point multiplication

Procedia PDF Downloads 78
20568 Transformation of Positron Emission Tomography Raw Data into Images for Classification Using Convolutional Neural Network

Authors: Paweł Konieczka, Lech Raczyński, Wojciech Wiślicki, Oleksandr Fedoruk, Konrad Klimaszewski, Przemysław Kopka, Wojciech Krzemień, Roman Shopa, Jakub Baran, Aurélien Coussat, Neha Chug, Catalina Curceanu, Eryk Czerwiński, Meysam Dadgar, Kamil Dulski, Aleksander Gajos, Beatrix C. Hiesmayr, Krzysztof Kacprzak, łukasz Kapłon, Grzegorz Korcyl, Tomasz Kozik, Deepak Kumar, Szymon Niedźwiecki, Dominik Panek, Szymon Parzych, Elena Pérez Del Río, Sushil Sharma, Shivani Shivani, Magdalena Skurzok, Ewa łucja Stępień, Faranak Tayefi, Paweł Moskal

Abstract:

This paper develops the transformation of non-image data into 2-dimensional matrices as a preparation stage for classification based on convolutional neural networks (CNNs). In positron emission tomography (PET) studies, a CNN may be applied directly to the reconstructed distribution of radioactive tracers injected into the patient's body as a pattern recognition tool. Nonetheless, much PET data still exists in non-image format, and this fact opens the question of whether it can be used for training CNNs. The main focus of this contribution is the problem of processing vectors with a small number of features in comparison to the number of pixels in the output images. The proposed methodology was applied to the classification of PET coincidence events.
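
The abstract leaves the exact transformation open; one generic way to map a short feature vector to a 2-D matrix, sketched here purely as an assumption and not necessarily the authors' method, is a self outer product, which encodes pairwise feature interactions as "pixels":

```python
# Illustrative vector-to-matrix transformation for CNN input:
# standardize the features, then take the outer product, giving
# an n x n matrix from an n-element feature vector.
import numpy as np

def vector_to_matrix(v: np.ndarray) -> np.ndarray:
    v = (v - v.mean()) / (v.std() + 1e-9)   # standardize features first
    return np.outer(v, v)                   # shape (n, n)

event = np.array([2.1, 0.3, 5.7, 1.2, 0.8])  # hypothetical coincidence features
img = vector_to_matrix(event)
print(img.shape)  # (5, 5) matrix ready to be fed to a small CNN
```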

Keywords: convolutional neural network, kernel principal component analysis, medical imaging, positron emission tomography

Procedia PDF Downloads 119
20567 Evaluation of Actual Nutrition Patients of Osteoporosis

Authors: Aigul Abduldayeva, Gulnar Tuleshova

Abstract:

Osteoporosis (OP) is a major socio-economic problem and a major cause of disability, reduced quality of life and premature death among elderly people. In Astana, the study involved 93 respondents, of whom 17 were men (18.3%) and 76 were women (81.7%). The age distribution of the respondents was as follows: 40-59 (66.7%), 60-75 (29.0%), 75-90 (4.3%). In the city of Astana, a general breach of bone mass (CCM) was determined in 83.8% of the patients (nationwide figure, RRP: 79.0%), and normal ultrasound densitometry levels were detected in 16.1% (RRP 21.0%) of the patients. OP was diagnosed in 20.4% of people over 40 (RRP for citizens: 19.0%), in 25.4% of the group older than 50 (RRP 23.4%), in 22.6% of the group older than 60 (RRP 32.6%), and in 25.0% of the group older than 70 (RRP 47.6%). OPN was detected in 63.4% (RRP 59.6%) of the surveyed population. These data indicate that there is no sharp difference between Astana and other cities in the country regarding the incidence of OP; that is, the situation with OP is not aggravated by any regional characteristics. In the distribution of respondents by clusters, it was found that 80.0% of the respondents with CCM were in the "best urban cluster", 93.8% in the "average urban cluster", and 77.4% in a "poor urban cluster". There is a high rate of construction of new buildings in Astana; presumably, new settlers inhabit the outskirts of the city, and socio-economic differences there are very difficult to trace. Based on these data, the following conclusions can be made: 1. According to ultrasound densitometry of the calcaneus, the prevalence rate of NCM among the residents of Astana is 83.3% and of OP is 20.4%, which generally coincides with data elsewhere in the country. 2. The urban population of Astana is under a high degree of risk for low-energy fracture: 46.2% of the population had medium or high risks of fracture, while the nationwide index is 26.7%. 3. Gender, age and ethnic factors play a significant role in the development of CCM among residents of the Akmola region. According to ultrasound densitometry, women in Astana are more prone to OP (22.4% of respondents) than men (11.8% of respondents).

Keywords: nutrition, osteoporosis, elderly, urban population

Procedia PDF Downloads 458
20566 Investigation of the Relationship between Government Expenditure and Country’s Economic Development in the Context of Sustainable Development

Authors: Lina Sinevičienė

Abstract:

Emerging problems in countries’ public finances, together with social and demographic changes, motivate scientific and policy debates on public spending size, structure and efficiency in order to meet the changing needs of society and business. The concept of sustainable development poses new challenges for scientists and policy-makers in the field of public finance. This paper focuses on the investigation of the relationship between government expenditure and a country’s economic development in the context of sustainable development. The empirical analysis focuses on data from the European Union countries (except Croatia and Luxembourg). The study covers the years 2003-2012, using annual cross-sectional data. Summarizing the research results, it can be stated that governments should pay more attention to the needs that ensure sustainable development in the long run when formulating public expenditure policy, particularly in the field of environmental protection.

Keywords: economic development, economic growth, government expenditure, sustainable development

Procedia PDF Downloads 278
20565 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

Determining soil elemental content and distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted by nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consists of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographic coordinates in a geographic information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours was needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper briefly describes the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and modes of operation when conducting field surveys. Soil elemental distribution maps resulting from field surveys are presented and discussed. These maps were similar to maps created on the basis of chemical analysis and of soil moisture measurements determined by soil electrical conductivity, and the maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable to agricultural field mapping of soil elements.
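
A hypothetical version of the final mapping step is sketched below with made-up coordinates and carbon values; the authors use ArcGIS rather than this matplotlib/scipy sketch, so the example only shows the generic "georeferenced readings to gridded map" idea.

```python
# Turn georeferenced carbon readings into a gridded distribution map.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 300, 200), rng.uniform(0, 280, 200)  # field coords, m
carbon = 1.5 + 0.004 * x + rng.normal(0, 0.1, 200)         # % C, synthetic

gx, gy = np.mgrid[0:300:100j, 0:280:100j]
grid = griddata((x, y), carbon, (gx, gy), method="linear")  # interpolate

plt.contourf(gx, gy, grid, levels=15, cmap="viridis")
plt.colorbar(label="soil carbon, %")
plt.xlabel("easting, m"); plt.ylabel("northing, m")
plt.title("Interpolated soil carbon distribution (synthetic data)")
plt.show()
```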

Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy

Procedia PDF Downloads 123
20564 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes

Authors: Nadarajah I. Ramesh

Abstract:

Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent developments on this topic and presents results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. In the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach, developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite-state irreducible Markov process X(t). Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting the models. One advantage of this approach is that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
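
To make the DSPP idea concrete, the following sketch simulates a two-state Markov-modulated Poisson process, i.e., a doubly stochastic Poisson process whose rate switches with an unobserved Markov chain; the rates and switching intensities are arbitrary illustrative values, not fitted parameters from the paper (which are estimated by maximum likelihood, not shown here).

```python
# Simulate a two-state Markov-modulated Poisson process: the arrival
# rate of bucket tips switches between a quiet and a stormy state
# according to an unobserved continuous-time Markov chain.
import numpy as np

rng = np.random.default_rng(42)
rates = np.array([0.2, 5.0])      # tips per minute in states 0 and 1
switch = np.array([0.01, 0.05])   # leave-state intensities (per minute)

t, state, horizon = 0.0, 0, 24 * 60.0   # simulate one day, in minutes
arrivals = []
while t < horizon:
    dwell = rng.exponential(1.0 / switch[state])  # time until state change
    end = min(t + dwell, horizon)
    # Homogeneous Poisson arrivals within the current state's interval
    n = rng.poisson(rates[state] * (end - t))
    arrivals.extend(np.sort(rng.uniform(t, end, n)))
    t, state = end, 1 - state

print(f"{len(arrivals)} simulated tip times over 24 h")
```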

Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model

Procedia PDF Downloads 261
20563 Review of Life-Cycle Analysis Applications on Sustainable Building and Construction Sector as Decision Support Tools

Authors: Liying Li, Han Guo

Abstract:

Considering the environmental issues generated by the building sector through its energy consumption, solid waste generation, water use, land use, and global greenhouse gas (GHG) emissions, this review points to LCA as a decision-support tool that can substantially improve sustainability in the building and construction industry. The comprehensiveness and simplicity of LCA make it one of the most promising decision-support tools for the sustainable design and construction of future buildings. This paper contains a comprehensive review of existing studies related to LCA, with a focus on its advantages and limitations when applied in the building sector. The aim of this paper is to enhance the understanding of building life-cycle analysis, thus promoting its application for effective, sustainable building design and construction in the future. Comparisons and discussions are carried out between four categories of LCA methods: building material and component combinations (BMCC) vs. the whole process of construction (WPC) LCA, attributional vs. consequential LCA, process-based LCA vs. input-output (I-O) LCA, and traditional vs. hybrid LCA. Classical case studies are presented that illustrate the effectiveness of LCA as a tool to support practitioners' decisions in the design and construction of sustainable buildings. (i) The BMCC and WPC categories of LCA research tend to overlap, as the majority of WPC LCAs are developed with the bottom-up approach that BMCC LCAs use. (ii) When considering the influence of social and economic factors outside the system boundary, a consequential LCA can provide a more reliable result than an attributional LCA. (iii) I-O LCA is complementary to process-based LCA in addressing the social and economic effects generated by building projects. (iv) Hybrid LCA provides a more dynamic perspective than a traditional LCA, which is criticized for its static view of the changing processes within the building’s life cycle. LCAs are still being developed to overcome their limitations and data shortages (especially data on the developing world), and the unification of LCA methods and data can make the results of building LCAs more comparable and consistent across studies and even countries.

Keywords: decision support tool, life-cycle analysis, LCA tools and data, sustainable building design

Procedia PDF Downloads 103
20562 Reinforced Concrete Bridge Deck Condition Assessment Methods Using Ground Penetrating Radar and Infrared Thermography

Authors: Nicole M. Martino

Abstract:

Reinforced concrete bridge deck condition assessments primarily use visual inspection methods, where an inspector looks for and records locations of cracks, potholes, efflorescence and other signs of probable deterioration. Sounding is another technique used to diagnose the condition of a bridge deck; however, this method listens for damage within the subsurface as the surface is struck with a hammer or chain. Even though extensive procedures are in place for using these inspection techniques, neither one provides the inspector with a comprehensive understanding of the internal condition of a bridge deck, the location where damage originates. In order to make accurate estimates of repair locations and quantities, in addition to allocating the necessary funding, a total understanding of the deck’s deteriorated state is key. The research presented in this paper collected infrared thermography and ground penetrating radar data from reinforced concrete bridge decks without an asphalt overlay. These decks were of various ages, and their condition varied from brand new to in need of replacement. The goals of this work were first to verify that these nondestructive evaluation methods could identify similar areas of healthy and damaged concrete, and then to see whether combining the results of both methods would provide higher confidence than a condition assessment completed using only one method. The results from each method were presented as plan-view color contour plots. The results from one of the decks assessed as part of this research, including these plan-view plots, are presented in this paper. Furthermore, to serve the interest of transportation agencies throughout the United States, this research developed a step-by-step guide which demonstrates how to collect and assess a bridge deck using these nondestructive evaluation methods. This guide addresses setup procedures on the deck during the day of data collection, system setups and settings for different bridge decks, data post-processing for each method, and data visualization and quantification.

Keywords: bridge deck deterioration, ground penetrating radar, infrared thermography, NDT of bridge decks

Procedia PDF Downloads 140
20561 Leveraging Learning Analytics to Inform Learning Design in Higher Education

Authors: Mingming Jiang

Abstract:

This literature review aims to offer an overview of existing research on learning analytics and learning design, the alignment between the two, and how learning analytics has been leveraged to inform learning design in higher education. Current research suggests a need to create more alignment and integration between learning analytics and learning design in order to not only ground learning analytics on learning sciences but also enable data-driven decisions in learning design to improve learning outcomes. In addition, multiple conceptual frameworks have been proposed to enhance the synergy and alignment between learning analytics and learning design. Future research should explore this synergy further in the unique context of higher education, identifying learning analytics metrics in higher education that can offer insight into learning processes, evaluating the effect of learning analytics outcomes on learning design decision-making in higher education, and designing learning environments in higher education that make the capturing and deployment of learning analytics outcomes more efficient.

Keywords: learning analytics, learning design, big data in higher education, online learning environments

Procedia PDF Downloads 142
20560 Identification of Soft Faults in Branched Wire Networks by Distributed Reflectometry and Multi-Objective Genetic Algorithm

Authors: Soumaya Sallem, Marc Olivas

Abstract:

This contribution presents a method for detecting, locating, and characterizing soft faults in complex wired networks. The proposed method is based on multi-carrier reflectometry MCTDR (Multi-Carrier Time Domain Reflectometry) combined with a multi-objective genetic algorithm. In order to ensure complete network coverage and eliminate diagnosis ambiguities, the MCTDR test signal is injected at several points on the network, and the data are merged between the different reflectometers (sensors) distributed over the network. An adapted multi-objective genetic algorithm is used to merge the data in order to obtain more accurate fault location and characterization. The performance of the proposed method is evaluated on numerical and experimental results.
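
As a toy sketch of the data-merging idea, the scalarized (single-objective) genetic algorithm below recovers a fault position from two sensors' echo delays; the line topology, fitness function, and GA settings are simplified assumptions, and the paper's multi-objective algorithm and MCTDR processing are far richer than this.

```python
# Toy GA: estimate a fault position on a 100 m line by matching the
# round-trip echo delays seen by two distributed reflectometers.
import numpy as np

rng = np.random.default_rng(7)
v = 2.0e8                          # propagation speed, m/s (assumed)
true_pos = 37.5                    # fault position, m (to be recovered)
sensors = np.array([0.0, 100.0])   # reflectometer positions on the line
measured = 2 * np.abs(sensors - true_pos) / v  # measured echo delays

def fitness(pos):                  # merge both sensors' residuals
    simulated = 2 * np.abs(sensors - pos) / v
    return -np.sum((simulated - measured) ** 2)

pop = rng.uniform(0, 100, 50)      # candidate fault positions
for _ in range(40):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)][-10:]                       # elitism
    children = rng.choice(parents, 40) + rng.normal(0, 1.0, 40)   # mutation
    pop = np.concatenate([parents, np.clip(children, 0, 100)])

best = pop[np.argmax([fitness(p) for p in pop])]
print(f"estimated fault position: {best:.2f} m (true {true_pos} m)")
```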

Keywords: wired network, reflectometry, network distributed diagnosis, multi-objective genetic algorithm

Procedia PDF Downloads 178
20559 The Linear Combination of Kernels in the Estimation of the Cumulative Distribution Functions

Authors: Abdel-Razzaq Mugdadi, Ruqayyah Sani

Abstract:

The Kernel Distribution Function Estimator (KDFE) method is the most popular method for nonparametric estimation of the cumulative distribution function, and the kernel and the bandwidth are its most important components. In this investigation, we replace the kernel in the KDFE with a linear combination of kernels to obtain a new estimator; the mean integrated squared error (MISE), the asymptotic mean integrated squared error (AMISE) and the asymptotically optimal bandwidth for the new estimator are derived. We also propose a new data-based method to select the bandwidth for the new estimator, based on the plug-in technique from density estimation. We evaluate the new estimator and the new technique using simulations and real-life data.
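
To illustrate, a minimal sketch of a KDFE built from a convex combination of Gaussian and Epanechnikov integrated kernels follows; the mixing weight and bandwidth here are placeholders, not the optimal plug-in selections the paper derives.

```python
# KDFE with a combined integrated kernel:
#   F_hat(x) = (1/n) * sum_i W((x - X_i) / h),
# where W = lam * W_gauss + (1 - lam) * W_epan.
import numpy as np
from scipy.stats import norm

def W_gauss(u):                    # integrated Gaussian kernel = normal CDF
    return norm.cdf(u)

def W_epan(u):                     # integral of 0.75*(1 - t^2) over [-1, u]
    u = np.clip(u, -1.0, 1.0)
    return 0.75 * u - 0.25 * u**3 + 0.5

def kdfe(x, data, h, lam=0.5):
    u = (x[:, None] - data[None, :]) / h
    W = lam * W_gauss(u) + (1 - lam) * W_epan(u)
    return W.mean(axis=1)

data = np.random.default_rng(3).normal(size=300)
grid = np.linspace(-3, 3, 7)
print(np.round(kdfe(grid, data, h=0.4), 3))   # estimated CDF values
```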

Keywords: estimation, bandwidth, mean square error, cumulative distribution function

Procedia PDF Downloads 564
20558 User Experience in Relation to Eye Tracking Behaviour in VR Gallery

Authors: Veslava Osinska, Adam Szalach, Dominik Piotrowski

Abstract:

Contemporary VR technologies allow users to explore virtual 3D spaces where they can work, socialize, learn, and play. Users' interaction with the GUI and the pictures displayed involves perceptual and also cognitive processes, which can be monitored thanks to neuroadaptive technologies. These modalities provide valuable information about the users' intentions, situational interpretations, and emotional states, allowing an application or interface to be adapted accordingly. Virtual galleries outfitted with specialized assets were designed using the Unity engine within the BITSCOPE project, in the frame of the CHIST-ERA IV programme. Users' interaction with gallery objects raises questions about their visual interest in artworks and styles. Moreover, attention, curiosity, and other emotional states can be monitored and analyzed. Natural gaze behaviour data and eye positions were recorded by the built-in eye-tracking module of the HTC Vive VR headset. Eye-gaze results are grouped according to various user behaviour schemes, and the corresponding perceptual-cognitive styles are recognized. In parallel, usability tests and surveys were used to identify the basic features of a user-centred interface for virtual environments across most of the project timeline. A total of sixty participants were selected from distinct faculties of the University and from secondary schools. Users' prior knowledge about art was evaluated in a pretest, characterizing their level of art sensitivity. Data were collected over two months. Each participant gave written informed consent before participation. In the data analysis, nonlinear algorithms that reduce the high-dimensional data to a relatively low-dimensional subspace were used, such as multidimensional scaling and the novel t-Stochastic Neighbor Embedding (t-SNE) technique. In this way, digital art objects can be classified by multimodal time characteristics of eye-tracking measures, revealing signatures describing selected artworks. The current research establishes the optimal place on the aesthetic-utility scale, because contemporary interfaces of most applications need to be designed in both functional and aesthetic ways. The study also concerns an analysis of visual experience for subsamples of visitors differentiated, e.g., in terms of frequency of museum visits or cultural interests. Eye-tracking data may also show how to better allocate artefacts and paintings or increase their visibility where possible.
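
As a sketch of the dimensionality-reduction step, assuming an invented per-viewer gaze feature set (the real feature set is not listed in the abstract):

```python
# Embed per-participant gaze feature vectors (e.g., fixation counts,
# dwell times, saccade lengths) into 2-D with t-SNE to look for
# clusters of viewing styles. Data here are synthetic placeholders.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(5)
gaze_features = rng.normal(size=(60, 8))   # 60 participants x 8 features

embedding = TSNE(n_components=2, perplexity=15,
                 random_state=0).fit_transform(gaze_features)
print(embedding.shape)   # (60, 2): one 2-D point per participant
```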

Keywords: eye tracking, VR, UX, visual art, virtual gallery, visual communication

Procedia PDF Downloads 22
20557 The Impact of E-Learning on the Performance of History Learners in Eswatini General Certificate of Secondary Education

Authors: Joseph Osodo, Motsa Thobekani Phila

Abstract:

The study investigated the impact of e-learning on the performance of History learners in the Eswatini General Certificate of Secondary Education in the Manzini region of Eswatini. The study was guided by the theory of connectivism. The study had three objectives: to find out the significance of e-learning in learning the History subject during the COVID-19 era; the challenges faced by History teachers and learners in e-learning; and how these challenges were mitigated. The study used a qualitative research approach and a descriptive research design. Purposive sampling was used to select eight History teachers and eight History learners from four secondary schools in the Manzini region. Data were collected using face-to-face interviews, then analyzed and presented thematically. The findings showed that History teachers had good knowledge of what e-learning was, while students had little understanding of e-learning. The forms of e-learning used during the pandemic to teach History in secondary schools included TV, radio, computers, projectors, and social media, especially WhatsApp. E-learning enabled the continuity of teaching and learning of the History subject, and its use through social media was more convenient for teachers and learners. It was concluded that in some secondary schools in the Manzini region, History teachers and learners encountered challenges such as a lack of finances to purchase e-learning gadgets and data bundles, a lack of skills, and limited access to the Internet. It was recommended that History teachers create more time to offer additional learning support to students whose performance was affected by the COVID-19 pandemic.

Keywords: e-learning, performance, COVID-19, history, connectivism

Procedia PDF Downloads 61
20556 Using TRACE, PARCS, and SNAP Codes to Analyze the Load Rejection Transient of ABWR

Authors: J. R. Wang, H. C. Chang, A. L. Ho, J. H. Yang, S. W. Chen, C. Shih

Abstract:

The purpose of this study is to analyze the load rejection transient of an ABWR using the TRACE, PARCS, and SNAP codes. The study proceeds in several steps. First, the model of the ABWR is established using TRACE, PARCS, and SNAP. Second, the key parameters are identified to further refine the TRACE/PARCS/SNAP model in the frame of a steady-state analysis. Third, the TRACE/PARCS/SNAP model is used to perform the load rejection transient analysis. Finally, the FSAR data are used for comparison with the analysis results. The TRACE/PARCS results are consistent with the FSAR data for the important parameters, indicating that the TRACE/PARCS/SNAP model of the ABWR has good accuracy for the load rejection transient.

Keywords: ABWR, TRACE, PARCS, SNAP

Procedia PDF Downloads 182
20555 Bluetooth Communication Protocol Study for Multi-Sensor Applications

Authors: Joao Garretto, R. J. Yarwood, Vamsi Borra, Frank Li

Abstract:

Bluetooth Low Energy (BLE) has emerged as one of the main wireless communication technologies used in low-power electronics, such as wearables, beacons, and Internet of Things (IoT) devices. BLE’s energy efficiency, smart mobile interoperability, and Over-the-Air (OTA) capabilities are essential features for ultralow-power devices, which are usually designed under size and cost constraints. Most current research on the power analysis of BLE devices focuses on the theoretical aspects of the advertising and scanning cycles, with most results presented in the form of mathematical models and computer software simulations. Such modeling and simulation are important for the comprehension of the technology, but hardware measurement is essential for understanding how BLE devices behave in real operation. In addition, recent literature focuses mostly on the BLE technology itself, leaving possible applications and their analysis out of scope. In this paper, a coin-cell-battery-powered BLE data acquisition device with a 4-in-1 sensor and one accelerometer is proposed and evaluated with respect to its power consumption. First, the device is evaluated in advertising mode with the sensors turned off completely; the power is then analyzed when each of the sensors is individually turned on and transmitting data; finally, power consumption is evaluated when both sensors are on and broadcasting their data to a mobile phone. The results presented in this paper are real-time measurements of the electrical current consumption of the BLE device, where the demonstrated energy levels are matched to the BLE behavior and sensor activity.
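
For context, the back-of-the-envelope duty-cycle model often used alongside such measurements is sketched below; all current and timing values are illustrative assumptions, not measurements from this work.

```python
# Average current of a BLE node that advertises briefly and sleeps for
# the rest of the advertising interval. Values are assumed placeholders.
ADV_CURRENT_MA = 8.0      # current during an advertising event
SLEEP_CURRENT_MA = 0.003  # deep-sleep current
ADV_EVENT_MS = 3.0        # duration of one advertising event
ADV_INTERVAL_MS = 1000.0  # advertising interval

avg_ma = (ADV_CURRENT_MA * ADV_EVENT_MS
          + SLEEP_CURRENT_MA * (ADV_INTERVAL_MS - ADV_EVENT_MS)
          ) / ADV_INTERVAL_MS

CR2032_MAH = 225.0        # nominal coin-cell capacity
print(f"average current: {avg_ma:.4f} mA")
print(f"estimated battery life: {CR2032_MAH / avg_ma / 24:.0f} days")
```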

Keywords: bluetooth low energy, power analysis, BLE advertising cycle, wireless sensor node

Procedia PDF Downloads 75
20554 The Effectiveness of the Management of Zakat on Dompet Dhuafa in Makassar

Authors: Nurul Qalbi Awaliyah, Rosmala Rauf, Indrawan, Suherman

Abstract:

Zakat is a certain amount of property which shall be given out by Muslims to groups who deserve it (the poor and so on) according to the conditions set by the sharia. This research aims to assess the effectiveness of the management of zakat by Dompet Dhuafa in Makassar. The type of research used is quantitative research with a descriptive method. Data collection was done through the dissemination of Likert-scale questionnaires. A total of 68 responses were analyzed using SPSS 18.0. The analysis of the effectiveness of zakat management in Makassar across all aspects, based on SPSS, yielded a mean of 140.04, a median of 141, a minimum of 122, and a maximum of 164. The overall indicators assessing the effectiveness of zakat management by Dompet Dhuafa in Makassar have an average score (M) of 112.5 and a standard deviation (SD) of 37.5. The results show that the level of effectiveness of zakat management in Makassar city falls in the effective category, with a percentage of 85.3%. Based on these results, it can be concluded that the management of zakat by Dompet Dhuafa in Makassar city has been implemented effectively.

Keywords: Dompet Duafa, effectiveness, management, Zakat

Procedia PDF Downloads 253
20553 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas

Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi

Abstract:

In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER daytime and nighttime scenes were used to calculate land surface temperature. We employed the Emissivity Normalization Algorithm, which separates temperature from emissivity, to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute the temperature due to solar radiation. We then identified statistical thermal anomalies using land surface temperature and the residuals calculated from modeled temperatures and ASTER-derived surface temperatures. Areas with temperatures or temperature residuals greater than 2σ, or between 1σ and 2σ, were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database, and the YNP hot springs and geysers were located within areas identified as anomalous. The consistency between our results and known geothermally active areas indicates that thermal remote sensing data, integrated with a spatial insolation model, provide an effective means of identifying and locating areas of geothermal activity over large areas and rough terrain.
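
The anomaly-flagging step can be sketched as simple sigma thresholds on the residual grid; the residuals below are synthetic, standing in for observed-minus-modeled surface temperatures.

```python
# Flag pixels whose temperature residual exceeds 1-sigma and 2-sigma
# thresholds, mirroring the statistical-anomaly criterion above.
import numpy as np

rng = np.random.default_rng(9)
residuals = rng.normal(0.0, 1.5, size=(200, 200))   # kelvin, synthetic
residuals[80:90, 120:130] += 8.0                     # implanted hot patch

mu, sigma = residuals.mean(), residuals.std()
strong = residuals > mu + 2 * sigma        # confident anomalies (> 2 sigma)
weak = (residuals > mu + sigma) & ~strong  # candidate anomalies (1-2 sigma)

print(f"strong anomaly pixels: {strong.sum()}, weak: {weak.sum()}")
```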

Keywords: thermal remote sensing, insolation model, land surface temperature, geothermal anomalies

Procedia PDF Downloads 353
20552 Distributed Cost-Based Scheduling in Cloud Computing Environment

Authors: Rupali, Anil Kumar Jaiswal

Abstract:

Cloud computing can be defined as one of the prominent technologies that lets a user change, configure and access services online. It can be described as a computing paradigm that helps save a user's cost and time; practical uses of cloud computing can be found in various fields like education, health and banking. Cloud computing is an Internet-dependent technology, so it is a major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling in a cloud computing environment plays a vital role: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. Job scheduling for cloud computing is analyzed in the following work. CloudSim 3.0.3 is utilized to simulate the task computations and the distributed scheduling methods. This work discusses job scheduling for a distributed processing environment, and by exploring this issue, we find that it works with minimum time and lower cost. Two load balancing techniques have been employed, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.

Keywords: physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model

Procedia PDF Downloads 155
20551 The Spatial Pattern of Economic Rents of an Airport Development Area: Lessons Learned from the Suvarnabhumi International Airport, Thailand

Authors: C. Bejrananda, Y. Lee, T. Khamkaew

Abstract:

With the rise of the importance of air transportation in the 21st century, the role of economics in airport planning and decision-making has become more important to the urban structure and land values around airports. Therefore, this research aims to examine the relationship between an airport and its impacts on the distribution of urban land uses and land values by applying Alonso's bid-rent model. The New Bangkok International Airport (Suvarnabhumi International Airport) was taken as a case study. The analysis was made over three different periods of airport development (after the airport site was proposed, during airport construction, and after the opening of the airport). The statistical results confirm that Alonso's model can be used to explain the impacts of the new airport only for the northeast quadrant of the airport, while proximity to the airport showed an inverse relationship with the land value of all six types of land use activities through the three periods. This indicates that the land value for commercial land use is the most sensitive to the location of the airport, that is, it has the strongest requirement for accessibility to the airport, compared to residential and manufacturing land uses. Also, the bid-rent gradients of the six types of land use activities declined dramatically through the three time periods because of the Asian Financial Crisis in 1997. The lessons learned from this research therefore concern the reliability of the data used. The major concern involves the use of different areal units for assessing land value in different time periods, zone blocks (1995) versus grid blocks (2002, 2009); as a result, the overall trends in land value assessment are not readily apparent. The next concern is the availability of historical data: with the lack of government collection of historical land value assessment data, some land value data and aerial photos are not available to cover the entire study area. Finally, the different formats of the aerial photos, hard copy (1995) versus digital (2002, 2009), made measuring distances difficult. These problems also affect the accuracy of the results of the statistical analyses.

Keywords: airport development area, economic rents, spatial pattern, suvarnabhumi international airport

Procedia PDF Downloads 266
20550 Applying GIS Geographic Weighted Regression Analysis to Assess Local Factors Impeding Smallholder Farmers from Participating in Agribusiness Markets: A Case Study of Vihiga County, Western Kenya

Authors: Mwehe Mathenge, Ben G. J. S. Sonneveld, Jacqueline E. W. Broerse

Abstract:

Smallholder farmers are important drivers of agricultural productivity, food security, and poverty reduction in Sub-Saharan Africa. However, they are faced with myriad challenges in their efforts to participate in agribusiness markets. How the geographically explicit factors existing at the local level interact to impede smallholder farmers' decision to participate (or not) in agribusiness markets is not well understood. Deconstructing the spatial complexity of the local environment could provide deeper insight into how geographically explicit determinants promote or impede resource-poor smallholder farmers from participating in agribusiness. This paper's objective was to identify, map, and analyze local spatial autocorrelation in the factors that impede poor smallholders from participating in agribusiness markets. Data were collected using geocoded researcher-administered survey questionnaires from 392 households in Western Kenya. Three spatial statistics methods in a geographic information system (GIS) were used to analyze the data: Global Moran's I, Cluster and Outlier Analysis (Anselin Local Moran's I), and geographically weighted regression. The results of Global Moran's I reveal the presence of spatial patterns in the dataset that were not caused by spatial randomness. Subsequently, the Anselin Local Moran's I results identified spatially and statistically significant local clustering (hot spots and cold spots) in the factors hindering smallholder participation. Finally, the geographically weighted regression results unearthed the specific geographically explicit factors impeding market participation in the study area. The results confirm that geographically explicit factors are indispensable in influencing smallholder farming decisions, and policymakers should take cognizance of them. Additionally, this research demonstrated how geospatially explicit analysis conducted at the local level, using geographically disaggregated data, can help identify households and localities where the most impoverished and resource-poor smallholder households reside. In designing spatially targeted interventions, policymakers could benefit from geospatial analysis methods in understanding the complex geographic factors and processes that interact to influence smallholder farmers' decision-making processes and choices.
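
A minimal sketch of the geographically weighted regression step follows, assuming synthetic household data and a Gaussian distance kernel; the actual analysis was performed with GIS tooling, so this only illustrates the core computation of locally weighted coefficients.

```python
# GWR in miniature: at each location, observations are weighted by a
# Gaussian kernel of distance and a local weighted least-squares fit is
# solved, so the coefficient can vary over space.
import numpy as np

rng = np.random.default_rng(11)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))          # household locations
x = rng.normal(size=n)                            # one explanatory factor
beta_true = 1.0 + 0.3 * coords[:, 0]              # effect grows eastwards
y = beta_true * x + rng.normal(0, 0.5, n)

def gwr_at(point, bandwidth=1.5):
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)             # Gaussian kernel weights
    X = np.column_stack([np.ones(n), x])
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)      # local weighted LS fit

west, east = gwr_at(np.array([1.0, 5.0])), gwr_at(np.array([9.0, 5.0]))
print(f"local slope west: {west[1]:.2f}, east: {east[1]:.2f}")
```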

Keywords: agribusiness markets, GIS, smallholder farmers, spatial statistics, disaggregated spatial data

Procedia PDF Downloads 128
20549 Ecosystem Modeling along the Western Bay of Bengal

Authors: A. D. Rao, Sachiko Mohanty, R. Gayathri, V. Ranga Rao

Abstract:

Modeling of coupled physical and biogeochemical processes in coastal waters is vital to identify primary production status under different natural and anthropogenic conditions. About 7,500 km of Indian coastline is occupied by a number of semi-enclosed coastal bodies such as estuaries, inlets, bays, lagoons, and other nearshore and offshore shelf waters. This coastline is also rich in a wide variety of ecosystem flora and fauna. Directly or indirectly, extensive domestic and industrial sewage enters these coastal water bodies, affecting ecosystem character and creating environmental problems such as water quality degradation, hypoxia, anoxia and harmful algal blooms, leading to declines in fishery and other related biological production. The present study focuses on the southeast coast of India, from Pulicat to the Gulf of Mannar, which is rich in marine diversity such as lagoon, mangrove and coral ecosystems. The three-dimensional Massachusetts Institute of Technology general circulation model (MITgcm), along with the Darwin biogeochemical module, is configured for the western Bay of Bengal (BoB) to study the biogeochemistry of this region. The biogeochemical module resolves the cycling of carbon, phosphorus, nitrogen, silica, iron and oxygen through inorganic, living, dissolved and particulate organic phases. The model domain extends from 4°N-16.5°N and 77°E-86°E with a horizontal resolution of 1 km. The bathymetry is derived from the General Bathymetric Chart of the Oceans (GEBCO), which has a resolution of 30 arc-seconds. The model is initialized using the temperature and salinity fields from the World Ocean Atlas (WOA2013) of the National Oceanographic Data Center, which has a resolution of 0.25°. The model is forced by surface wind stress from ASCAT and photosynthetically active radiation from the MODIS-Aqua satellite. Seasonal climatologies of nutrients (phosphate, nitrate and silicate) for the southwest BoB region were prepared using available National Institute of Oceanography (NIO) in-situ data sets and compared with the WOA2013 seasonal climatology. The model simulations with the two different initial conditions, viz., WOA2013 and the generated NIO climatology, showed evident changes in the concentration and evolution of nutrients in the study region; the availability of nutrients is higher in the NIO data than in WOA within the model domain. The model-simulated primary productivity is compared with spatially distributed satellite-derived chlorophyll data and, at various locations, with in-situ data. The seasonal variability of the model-simulated primary productivity is also studied.

Keywords: Bay of Bengal, Massachusetts Institute of Technology general circulation model, MITgcm, biogeochemistry, primary productivity

Procedia PDF Downloads 125
20548 Volatility Spillover and Hedging Effectiveness between Gold and Stock Markets: Evidence for BRICS Countries

Authors: Walid Chkili

Abstract:

This paper investigates the dynamic relationship between gold and stock markets using data for the BRICS countries. For this purpose, we estimate three multivariate GARCH models (namely CCC, DCC and BEKK) for weekly stock and gold data. Our main objective is to examine time variations in the conditional correlations between the two assets and to check the effectiveness of gold as a hedge for equity markets. Empirical results reveal that the dynamic conditional correlations switch between positive and negative values over the period under study. The correlation is negative during the major financial crises, suggesting that gold can act as a safe haven during major stress periods of stock markets. We also evaluate the implications for portfolio diversification and hedging effectiveness for the gold/stock pair. Our findings suggest that adding gold to a stock portfolio enhances its risk-adjusted return.
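
For reference, the standard hedge computations such studies report, the optimal hedge ratio beta = cov(stock, gold)/var(gold) and the hedging effectiveness HE = 1 - var(hedged)/var(unhedged), are sketched below with static sample moments and synthetic returns; the paper itself derives these from multivariate GARCH conditional moments, which this sketch does not estimate.

```python
# Static-moment sketch of hedge ratio and hedging effectiveness.
import numpy as np

rng = np.random.default_rng(21)
stock = rng.normal(0.001, 0.02, 1000)                # weekly stock returns
gold = 0.3 * stock + rng.normal(0.0005, 0.01, 1000)  # correlated gold returns

beta = np.cov(stock, gold)[0, 1] / np.var(gold)      # optimal hedge ratio
hedged = stock - beta * gold                          # hedged portfolio
he = 1 - np.var(hedged) / np.var(stock)              # hedging effectiveness

print(f"hedge ratio: {beta:.3f}, hedging effectiveness: {he:.3f}")
```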

Keywords: gold, financial markets, hedge, multivariate GARCH

Procedia PDF Downloads 456
20547 Energy Efficient Routing Protocol with Ad Hoc On-Demand Distance Vector for MANET

Authors: K. Thamizhmaran, Akshaya Devi Arivazhagan, M. Anitha

Abstract:

The most important systematic issue that must be solved when implementing a data transmission algorithm in mobile ad hoc networks (MANETs) is how to save the mobile nodes' energy while meeting the requirements of applications or users, since mobile nodes are battery-limited. While satisfying the energy-saving requirement, it is also necessary to achieve quality of service; in emergency work, for example, data must be delivered on time. In order to achieve this requirement, we implement an energy-aware routing protocol for mobile ad hoc networks that saves energy at every node by efficiently selecting an energy-efficient path in the routing process by means of an enhanced AODV routing protocol.
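
As a toy illustration of energy-aware path selection, assuming an invented topology and penalty rule (this is a centralized graph computation, not the distributed EAODV route-discovery protocol itself):

```python
# Edges are weighted by transmission cost plus a penalty for relaying
# through nodes with low residual battery; a shortest path then avoids
# energy-poor relays. All values are illustrative assumptions.
import networkx as nx

residual = {"A": 0.9, "B": 0.2, "C": 0.8, "D": 0.7, "E": 0.95}  # battery, 0-1
links = [("A", "B", 1.0), ("B", "E", 1.0), ("A", "C", 1.5),
         ("C", "D", 1.0), ("D", "E", 1.2)]

G = nx.Graph()
for u, v, tx_cost in links:
    penalty = 1.0 / min(residual[u], residual[v])  # penalize weak relays
    G.add_edge(u, v, weight=tx_cost + penalty)

path = nx.shortest_path(G, "A", "E", weight="weight")
print("energy-aware route:", path)   # avoids the low-battery node B
```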

Keywords: Ad-Hoc networks, MANET, routing, AODV, EAODV

Procedia PDF Downloads 351