Search results for: scientific data mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27127

22117 The Promotion of Andalusian Heritage through Tourism in the Medina of Marrakech

Authors: Nour Eddine Nachouane, Aicha Knidiri

Abstract:

Hispano-Moorish art was born in 786, when Abd ar-Rahman built the first mosque in Cordoba. It remains a living art in the craft trades of the big Moroccan cities. The different artistic forms of Arab-Muslim art find their full development in traditional Moroccan architecture, and this heritage allows artists and artisans to create magnificent masterpieces. Marrakech, by way of example, constitutes a symbolic city that reflects the rich history of this art, carried by a long artisanal tradition that is still alive today. Despite its recognition by UNESCO as intangible cultural heritage, and beyond official speeches, several of these craft trades are endangered, and with them a whole history of millennial savoir-faire. Drawing on an empirical study of the old historic center, the 'medina' of Marrakech, this article explores the opportunity offered by the tourism industry to protect these craft trades. We question artisans on the evolution of the sector and on the challenges of transmitting this heritage. We evoke the case of Spanish cities such as Granada in a comparative reflection on the strategies and perceptions of public administrations on the one hand and, on the other, on the shared experience of artisans and tourists. In an interdisciplinary approach mixing anthropology, history, sociology, and geography, we question the capacity of heritage processes to mobilize and involve a set of actors and to activate a trajectory for safeguarding Andalusian arts and techniques. The basic assumption of this research is that the promotion of traditional craft trades through tourism, grounded in sound scientific knowledge, can present an original offer to cope with globalization and guarantee the transmission of this savoir-faire to new generations.
Research in the field of Islamic arts does not constitute a retreat into nationalist identity or a fixation on the past, but an opening towards cultural diversity, free from any standardization.

Keywords: heritage, art andalusi, handcraft, tourism

Procedia PDF Downloads 168
22116 The Formation of Thin Copper Films on Graphite Surface Using Magnetron Sputtering Method

Authors: Zydrunas Kavaliauskas, Aleksandras Iljinas, Liutauras Marcinauskas, Mindaugas Milieska, Vitas Valincius

Abstract:

The magnetron sputtering deposition method is often used to obtain thin film coatings. The main advantages of magnetron sputtering over other deposition methods are the high erosion rate of the cathode material (e.g., copper, aluminum, etc.) and the ability to operate under low-pressure conditions. The structure of the formed coatings depends on the working parameters of the magnetron deposition system, which makes it possible to influence the properties of the growing film, such as morphology, crystal orientation and dimensions, stresses, adhesion, etc. The properties of these coatings depend on the distance between the substrate and the magnetron surface, the vacuum depth, the gas used, etc. Using this deposition technology, substrates are most often placed near the anode. The magnetic trap of the magnetron, which localizes electrons in the cathode region, is formed by a permanent magnet system on the cathode side. The scientific literature suggests that inserting a small amount of copper into graphite increases the electronic conductivity of the graphite. The aim of this work is to create thin (up to 300 nm) copper layers on a graphite surface using the magnetron evaporation method, to investigate the formation peculiarities and microstructure of the thin films, and to study the mechanism of copper diffusion into the inner graphite layers at different thermal treatment temperatures. A scanning electron microscope was used to investigate the microrelief of the coating surface. The chemical composition was determined using the EDS method, which shows that, as the thermal treatment of the copper-carbon layer increases from 200 °C to 400 °C, the copper content is reduced from 8 to 4 atomic percent.
This is because the EDS method captures only the copper on the graphite surface: as the heat-treatment temperature increases, part of the copper penetrates into the inner layers of the graphite through diffusion. The XRD method shows that the crystalline copper structure is not affected by the thermal treatment.

Keywords: carbon, coatings, copper, magnetron sputtering

Procedia PDF Downloads 297
22115 Study of Morphological Changes of the River Ganga in Patna District, Bihar Using Remote Sensing and GIS Techniques

Authors: Bhawesh Kumar, A. P. Krishna

Abstract:

The earth's surface is continuously changed by a variety of natural and anthropogenic agents that cut, carry away, and deposit material. Running water has a higher capacity for erosion than other geomorphological agents. This research work was carried out on the River Ganga, whose channel is continuously changing under the influence of geomorphic agents and human activities in the surrounding regions. The main focus is to study the morphological characteristics and sand dynamics of the Ganga, with particular emphasis on bank line and width changes, using remote sensing and GIS techniques. Remote sensing data of different years (LANDSAT TM 1975, 1988, 1993, ETM 2005 and ETM 2012) and the Survey of India toposheet for 1960 were used as base maps, allowing 52 years of changes to be interpreted. The sinuosity ratio, braiding index and migratory activity index were also established. The sinuosity ratio was found to be 1.16 in 1975, and 1.09, 1.11, 1.10 and 1.09 in 1988, 1993, 2005 and 2012, respectively. The analysis also shows that the minimum braiding index, found in 1960, was in reach 1, and the maximum value of 4.8806, in 2012, was found in reach 4, which suggests the creation of a number of islands in reach 4 by 2012. The migratory activity index (MAI), a standardized function of both length and time, was computed for the 8 representative reaches. MAI shows that maximum migration occurred in 1975-1988 in reaches 6 and 7, and minimum migration in 1993-2005. From the channel change analysis, it was found that the shifting of the bank line was cyclic and that the River Ganga showed a trend of maximum shift toward the south. These changes in the river are due to various natural and man-made activities such as floods, water velocity, removal of vegetation cover, and excavation of fertile soil for various purposes in the surrounding regions.
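The sinuosity ratio used above is conventionally the along-channel length of a reach divided by its straight-line (valley) length. A minimal sketch of that computation, on hypothetical digitized centreline coordinates rather than the study's actual Ganga reaches:

```python
import math

def path_length(points):
    """Total along-channel length of a digitized centreline."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def sinuosity_ratio(centreline):
    """Channel length divided by straight-line (valley) length.
    A value of 1.0 is a straight reach; higher values indicate meandering."""
    straight = math.dist(centreline[0], centreline[-1])
    return path_length(centreline) / straight

# Hypothetical digitized reach (km coordinates), not actual Ganga data
reach = [(0, 0), (1, 0.3), (2, -0.2), (3, 0.4), (4, 0)]
```

Applied per reach and per satellite scene, this yields the year-by-year series of ratios reported in the abstract.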

Keywords: braided index, migratory activity index (MAI), Ganga river, river morphology

Procedia PDF Downloads 353
22114 e-Learning Security: A Distributed Incident Response Generator

Authors: Bel G Raggad

Abstract:

An e-Learning setting is a distributed computing environment in which information resources can be connected to any public network. Public networks are very insecure, which can compromise the reliability of an e-Learning environment. This study is concerned only with the intrusion detection aspect of e-Learning security and with how incident responses are planned. The literature reports great advances in intrusion detection systems (IDS) but has neglected an important IDS weakness: suspicious events are detected, but an intrusion is not determined because it is not defined in the IDS database. We propose a distributed incident response generator (DIRG) that produces incident responses when the working IDS suspects an event that does not correspond to a known intrusion. Data involved in intrusion detection under ample uncertainty are often not suitable for formal statistical models, including Bayesian ones. We instead adopt Dempster-Shafer theory to process intrusion data for the unknown event. The DIRG engine transforms data into a belief structure using incident scenarios deduced by the security administrator. Belief values associated with the various incident scenarios are then derived and evaluated to choose the most appropriate scenario, for which an automatic incident response is generated. This article provides a numerical example demonstrating the working of the DIRG system.
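Since the abstract does not reproduce its numerical example, the following is only a generic sketch of the Dempster-Shafer machinery a DIRG-style engine relies on: two mass functions over incident scenarios are fused with Dempster's rule of combination, and the scenario with the highest resulting mass would drive the response. The scenario names are hypothetical.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose keys
    are frozensets of incident scenarios (focal elements). Mass landing
    on the empty intersection is conflict and is normalized away."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict  # normalization constant
    return {s: w / k for s, w in combined.items()}

# Two hypothetical evidence sources assigning mass to incident scenarios;
# frozenset({'port_scan', 'dos'}) plays the role of the "uncertain" hypothesis.
m1 = {frozenset({'port_scan'}): 0.6, frozenset({'port_scan', 'dos'}): 0.4}
m2 = {frozenset({'dos'}): 0.3, frozenset({'port_scan', 'dos'}): 0.7}
fused = dempster_combine(m1, m2)
```

Here the fused masses sum to one, and the scenario with the highest belief would be selected for the automatic response.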

Keywords: decision support system, distributed computing, e-Learning security, incident response, intrusion detection, security risk, stateful inspection

Procedia PDF Downloads 442
22113 Measure-Valued Solutions to a Class of Nonlinear Parabolic Equations with Degenerate Coercivity and Singular Initial Data

Authors: Flavia Smarrazzo

Abstract:

Initial-boundary value problems for nonlinear parabolic equations having a Radon measure as initial data have been widely investigated, looking for solutions which for positive times take values in some function space. On the other hand, if the diffusivity degenerates too fast at infinity, it is well known that function-valued solutions may not exist, singularities may persist, and it looks very natural to consider solutions which, roughly speaking, for positive times describe an orbit in the space of the finite Radon measures. In this general framework, our purpose is to introduce a concept of measure-valued solution which is consistent with respect to regularizing and smoothing approximations, in order to develop an existence theory which depends neither on the level of degeneracy of the diffusivity at infinity nor on the choice of the initial measures. In more detail, we prove existence of suitably defined measure-valued solutions to the homogeneous Dirichlet initial-boundary value problem for a class of nonlinear parabolic equations without strong coercivity. Moreover, we also discuss some qualitative properties of the constructed solutions concerning the evolution of their singular part, including conditions (depending both on the initial data and on the strength of the degeneracy) under which the constructed solutions are in fact function-valued or not.

Keywords: degenerate parabolic equations, measure-valued solutions, Radon measures, Young measures

Procedia PDF Downloads 287
22112 Gender Features of Left Ventricular Myocardial Remodeling and the Development of Chronic Heart Failure in Patients with Postinfarction Cardiosclerosis

Authors: G. Dadashova, A. Bakhshaliyev

Abstract:

Aim: To determine gender differences in etiology and clinical outcomes, as well as in the remodeling of the left ventricle (LV), in patients with chronic heart failure (CHF) suffering from arterial hypertension (AH) and coronary heart disease (CHD). Material and methods: The study included 112 patients of both sexes, aged 45 to 60 years, with postinfarction cardiosclerosis and NYHA functional class (FC) II-IV heart failure, examined at the Azerbaijan Scientific Research Institute of Cardiology. The patients were divided into 2 groups: group 1, 60 men with a mean age of 54.8 ± 3.3 years, and group 2, 52 women with a mean age of 55.8 ± 3.1 years. To assess cardiac hemodynamics, all patients underwent echocardiography (B- and M-modes) using a 'Vivid 3' system. On the basis of indicators such as the index of relative left ventricular wall thickness and the left ventricular mass index (LVMI), the architectonic model of the left ventricle was identified. Results: According to our research, the leading cause of heart failure in women is hypertension, in 50.5% of cases, with ischemic heart disease accounting for 23.7% (and 79.5% of CHF cases in women developed without a history of myocardial infarction). In men, CHD is the undisputed leader, accounting for 78.3% of CHF (80.3% of CHF in men occurred after myocardial infarction). CHF in women develops more often than in men as a type of diastolic dysfunction (DD), with left ventricular ejection fraction remaining unchanged: DD occurred in 65.8% of men vs. 76.4% of women (p < 0.05). Prognostically unfavorable remodeling was more common in the group of women: eccentric left ventricular hypertrophy in 68% vs. 54.5% among men (p < 0.05), and concentric left ventricular hypertrophy in 21% of women vs. 19.1% of men (p > 0.05). Conclusions: Patients with heart failure show a number of gender-specific features: the prevalence of hypertension in women and of coronary heart disease in men.
In women with heart failure, diastolic dysfunction is recorded more often, and the development of prognostically unfavorable remodeling types, eccentric and concentric LV hypertrophy, is characteristic.
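The architectonic models referred to above are commonly derived from LVMI and relative wall thickness. A hedged sketch using widely cited echocardiographic cut-offs, which may differ from the thresholds the authors actually applied:

```python
def lv_geometry(lvmi, rwt, sex):
    """Classify left-ventricular geometry from the LV mass index (g/m^2)
    and relative wall thickness, using common echocardiographic cut-offs
    (LVMI > 115 g/m^2 in men, > 95 g/m^2 in women; RWT threshold 0.42).
    These thresholds are illustrative, not necessarily the study's."""
    hypertrophy = lvmi > (115 if sex == 'M' else 95)
    if hypertrophy:
        return 'concentric hypertrophy' if rwt > 0.42 else 'eccentric hypertrophy'
    return 'concentric remodeling' if rwt > 0.42 else 'normal geometry'
```

With such a rule, each patient's echocardiographic pair (LVMI, RWT) maps to one of the four remodeling types compared between the two groups.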

Keywords: chronic heart failure, arterial hypertension, remodeling, diastolic dysfunction, men, women, ischemic heart disease

Procedia PDF Downloads 353
22111 The Association between IFNAR2 and Dpp9 Genes Single Nucleotide Polymorphisms Frequency with COVID-19 Severity in Iranian Patients

Authors: Sima Parvizi Omran, Rezvan Tavakoli, Mahnaz Safari, Mohammadreza Aghasadeghi, Abolfazl Fateh, Pooneh Rahimi

Abstract:

Background: SARS-CoV-2, a single-stranded RNA betacoronavirus, caused the global outbreak of coronavirus disease 2019 (COVID-19). This pandemic raises several clinical and scientific concerns. Genetic factors can contribute to pathogenesis and disease susceptibility: single nucleotide polymorphisms (SNPs) in many immune-system genes affect the expression of specific genes or the function of proteins involved in immune responses against viral infections. In this study, we analyzed the impact of polymorphisms in the interferon alpha and beta receptor subunit 2 (IFNAR2) and dipeptidyl peptidase 9 (Dpp9) genes, together with clinical parameters, on susceptibility and resistance to COVID-19. Methods: A total of 330 SARS-CoV-2-positive patients (188 survivors and 142 nonsurvivors) were included in this study. The SNPs on IFNAR2 (rs2236757) and Dpp9 (rs2109069) were genotyped by the polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method. Results: In survivor patients, the frequency of the favourable IFNAR2 genotype (rs2236757 GC) was significantly higher than in nonsurvivor patients, and the Dpp9 (rs2109069 AT) genotypes were associated with the severity of COVID-19 infection. Conclusions: This study demonstrated that the severity of COVID-19 was strongly associated with clinical parameters and with unfavourable IFNAR2 and Dpp9 SNP genotypes. To establish the relationship between host genetic factors and the severity of COVID-19 infection, further studies are needed in multiple parts of the world.
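As a rough illustration of the kind of association measure behind such genotype-frequency comparisons, the odds ratio of carrying a given genotype in survivors versus nonsurvivors can be computed as below; the counts are invented, not the study's data:

```python
def genotype_odds_ratio(survivors, nonsurvivors, genotype):
    """Odds ratio of carrying `genotype` in survivors vs. nonsurvivors,
    a basic 2x2 case-control measure of association (illustrative only)."""
    a = sum(g == genotype for g in survivors)       # survivors with genotype
    b = len(survivors) - a                          # survivors without
    c = sum(g == genotype for g in nonsurvivors)    # nonsurvivors with
    d = len(nonsurvivors) - c                       # nonsurvivors without
    return (a * d) / (b * c)

# Hypothetical genotype lists for the rs2236757 SNP
survivors = ['GC'] * 60 + ['GG'] * 40
nonsurvivors = ['GC'] * 30 + ['GG'] * 70
```

An odds ratio above 1 would indicate the genotype is more common among survivors; a significance test (e.g., chi-square) would then be applied.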

Keywords: SARS-CoV-2, COVID-19, interferon alpha and beta receptor subunit 2, dipeptidyl peptidase 9, single-nucleotide polymorphisms

Procedia PDF Downloads 167
22110 Developing the P1-P7 Management and Analysis Software for Thai Child Evaluation (TCE) of Food and Nutrition Status

Authors: S. Damapong, C. Kingkeow, W. Kongnoo, P. Pattapokin, S. Pruenglamphu

Abstract:

Given the presence of double-burden malnutrition among Thai children, we conducted a project to promote holistic, age-appropriate nutrition for Thai children. The researchers developed the P1-P7 computer software for managing and analyzing diverse types of collected data. The study objectives were: i) to use the software to manage and analyze the collected data, and ii) to evaluate the children's nutritional status and their caretakers' nutrition practices in order to create regulations for improving nutrition. Data were collected by means of questionnaires, called P1-P7. P1, P2 and P5 were for children and caretakers; the others were for institutions. The children's nutritional status (height-for-age, weight-for-age, and weight-for-height) was calculated using Thai child z-score references. Institution evaluations covered various standard regulations, including the use of our software. The results showed that the software was used in 44 out of 118 communities (37.3%), 57 out of 240 child development centers and nurseries (23.8%), and 105 out of 152 schools (69.1%). No major problems have been reported with the software, although user efficiency could be increased further through additional training. As a result, the P1-P7 software was used to manage and analyze nutritional status, nutrition behavior, and environmental conditions in order to conduct the Thai Child Evaluation (TCE). The software was most widely used in schools. Some aspects of the P1-P7 questionnaires could be modified to increase ease of use and efficiency.
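Anthropometric status of the kind the TCE software computes is based on z-scores against reference medians and standard deviations for age and sex. A minimal sketch, assuming WHO-style cut-offs for illustration (the actual system used Thai child references):

```python
def z_score(measured, ref_median, ref_sd):
    """Anthropometric z-score: how many reference standard deviations the
    child's measurement lies from the reference median for age/sex."""
    return (measured - ref_median) / ref_sd

def classify_weight_for_age(z):
    """WHO-style cut-offs (illustrative; the study used Thai references):
    below -2 SD is underweight, above +2 SD is overweight."""
    if z < -2:
        return 'underweight'
    if z > 2:
        return 'overweight'
    return 'normal'
```

For example, a child weighing 12.0 kg against a reference median of 14.0 kg with SD 1.0 kg has z = -2.0 and sits on the underweight boundary.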

Keywords: P1-P7 software, Thai child evaluation, nutritional status, malnutrition

Procedia PDF Downloads 359
22109 Synthesis and Gas Transport Properties of Polynorbornene Dicarboximides Bearing Trifluoromethyl Isomer Moieties

Authors: Jorge A. Cruz-Morales, Joel Vargas, Arlette A. Santiago, Mikhail A. Tlenkopatchev

Abstract:

In industrial processes such as oil extraction and refining, products are handled or generated in the gas phase, which represents a challenge in terms of treatment and purification. During the past three decades, new scientific findings and technological advances in membrane-based separation have led to simpler and more efficient gas separation processes, optimizing the use of energy and generating less pollution. This work reports the synthesis and ring-opening metathesis polymerization (ROMP) of new structural isomers based on norbornene dicarboximides bearing trifluoromethyl moieties, specifically N-2-trifluoromethylphenyl-exo,endo-norbornene-5,6-dicarboximide (2a) and N-3-trifluoromethylphenyl-exo,endo-norbornene-5,6-dicarboximide (2b), using tricyclohexylphosphine [1,3-bis(2,4,6-trimethylphenyl)-4,5-dihydroimidazol-2-ylidene][benzylidene] ruthenium dichloride (I), bis(tricyclohexylphosphine) benzylidene ruthenium (IV) dichloride (II), and bis(tricyclohexylphosphine) p-fluorophenylvinylidene ruthenium (II) dichloride (III). It was observed that a -CF3 moiety attached at the ortho position of the aromatic ring increases the thermal and mechanical properties of the polymer, whereas meta substitution has the opposite effect. A comparative study of gas transport in membranes based on these fluorinated polynorbornenes showed that -CF3 ortho substitution increases the permeability of the polymer membrane as a consequence of an increase in both gas solubility and gas diffusion. In contrast, the gas permeability coefficients of the meta-substituted polymer membrane are rather similar to those of the non-fluorinated one, which can be attributed to a lower fractional free volume. The ortho-substituted polymer membrane, besides showing the largest permselectivity coefficients of all the isomers studied here, was also found to have one of the largest permselectivity coefficients for the H2/C3H6 gas pair among glassy polynorbornene dicarboximides.
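In the solution-diffusion picture invoked above, permeability is the product of diffusivity and solubility, and ideal permselectivity for a gas pair is the ratio of the pure-gas permeability coefficients. A small sketch with hypothetical values, not the paper's measurements:

```python
def permeability(diffusivity, solubility):
    """Solution-diffusion model: P = D * S, which is why the ortho isomer's
    higher solubility and diffusion both raise its permeability."""
    return diffusivity * solubility

def ideal_selectivity(p_a, p_b):
    """Ideal permselectivity for gas pair A/B: ratio of pure-gas
    permeability coefficients (commonly reported in Barrer)."""
    return p_a / p_b

# Hypothetical coefficients for illustration only (not the paper's data)
p_h2 = permeability(2.0, 20.0)     # fast, weakly sorbing gas
p_c3h6 = permeability(0.05, 10.0)  # slow, condensable gas
```

With these invented numbers, H2 permeates 80 times faster than C3H6, the kind of H2/C3H6 permselectivity figure membranes are compared on.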

Keywords: gas transport membranes, polynorbornene dicarboximide, ROMP, structural isomers

Procedia PDF Downloads 258
22108 Determination of the Factors Affecting Adjustment Levels of First Class Students at Elementary School

Authors: Sibel Yoleri

Abstract:

This research aims to determine the adjustment to school of students attending the first class of elementary school, in terms of several variables. The study group consists of 286 students (131 female, 155 male) attending the first class of elementary school in the 2013-2014 academic year in the city center of Uşak. The 'Personal Information Form' and the 'Walker-McConnell Scale of Social Competence and School Adjustment' were used as data collection tools. In the analysis, the independent-samples t-test was applied to determine whether the students' school adjustment scores differ according to the sex variable. For data identified as not normally distributed, the Mann-Whitney U test was applied for paired comparisons and the Kruskal-Wallis H test for multiple comparisons. All statistical tests were evaluated as two-tailed, and the level of significance was accepted as .05. According to the results, no meaningful difference was identified in the students' level of adjustment to school in terms of the sex variable. The research found that the adjustment level of students who started school at the age of seven is higher than that of those who started at the age of five, and that the adjustment level of students who received preschool education before elementary school is higher than that of those who did not.
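For intuition, the Mann-Whitney U statistic used above for the non-normal comparisons can be computed by brute force on small samples (counting, over all cross-group pairs, how often one group's score exceeds the other's, with ties counted as half); the data below are invented:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for group x against group y: the number
    of (x_i, y_j) pairs with x_i > y_j, ties counted as 0.5. Brute-force
    O(n*m), which is fine for small classroom-sized samples."""
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
```

In practice U is compared against its null distribution (or a normal approximation for larger samples) to obtain the two-tailed p-value.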

Keywords: starting school, preschool education, school adjustment, Walker-McConnell Scale

Procedia PDF Downloads 492
22107 Comparative Analysis of Medical Tourism Industry among Key Nations in Southeast Asia

Authors: Nur A. Azmi, Suseela D. Chandran, Fadilah Puteh, Azizan Zainuddin

Abstract:

Medical tourism has become a global phenomenon in developed and developing countries in the 21st century. Medical tourism is defined as an activity in which individuals travel from one country to another to seek or receive medical healthcare. Based on the global trend, the number of medical tourists is increasing annually, especially in the Southeast Asia (SEA) region. Since the establishment of the Association of Southeast Asian Nations (ASEAN) in 1967, the SEA nations have worked towards regional integration in medical tourism, and medical tourism in SEA has become the third-largest sector contributing to economic development. Previous research has demonstrated several factors that affect the development of medical tourism. However, despite the literature published on SEA's medical tourism in the last ten years, there continues to be a scarcity of research on the niche areas of each of the SEA countries. Hence, this paper is significant in enriching the literature in the field of medical tourism, particularly in showcasing the niche markets of medical tourism among the best SEA players, namely Singapore, Thailand, Malaysia and Indonesia. This paper also contributes a comparative analysis of whether the said nations complement or compete with each other in the medical tourism sector, thereby increasing the availability of information on medical tourism in the SEA region. The data were collected through in-depth interviews with various stakeholders and private hospitals and were analyzed using two approaches, namely thematic analysis (interview data) and document analysis (secondary data). The paper concludes by arguing that the ASEAN countries have specific niche markets with which to promote their medical tourism industries and that these key nations complement each other in the industry.
In addition, the medical tourism sector in the SEA region offers great prospects for market development and expansion, as witnessed by the emergence of new key players from other nations.

Keywords: healthcare services, medical tourism, medical tourists, SEA region, comparative analysis

Procedia PDF Downloads 144
22106 Care: A Cluster Based Approach for Reliable and Efficient Routing Protocol in Wireless Sensor Networks

Authors: K. Prasanth, S. Hafeezullah Khan, B. Haribalakrishnan, D. Arun, S. Jayapriya, S. Dhivya, N. Vijayarangan

Abstract:

The main goal of our approach is to find the optimal positions for the sensor nodes, reinforcing communications at points where a lack of connectivity is found. Routing is the major problem in a sensor network's data transfer between nodes. We provide an efficient routing technique so that data signals reach the base station promptly and without interruption. Clustering and routing are the two key factors to be considered in a WSN. To carry out the communication from the nodes to their cluster head, we propose a parameterizable protocol, so that the developer can indicate whether the routing should be sensitive to the link quality of the nodes or to their battery levels.
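A minimal sketch of the parameterizable next-hop choice described above: a single tunable weight decides whether routing favors link quality or battery level. The field names and the linear scoring rule are assumptions for illustration, not the paper's protocol:

```python
def next_hop(candidates, alpha=0.5):
    """Pick the neighbour maximizing a tunable mix of link quality and
    residual battery: alpha=1.0 makes routing purely link-sensitive,
    alpha=0.0 purely battery-sensitive (the developer's parameter)."""
    def score(node):
        return alpha * node['link_quality'] + (1 - alpha) * node['battery']
    return max(candidates, key=score)

# Hypothetical neighbours of a node forwarding toward its cluster head
neighbours = [
    {'id': 'a', 'link_quality': 0.9, 'battery': 0.2},
    {'id': 'b', 'link_quality': 0.5, 'battery': 0.9},
]
```

The same score can be recomputed as batteries drain, so traffic migrates away from depleted relays without changing the protocol itself.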

Keywords: clusters, routing, wireless sensor networks, three phases, sensor networks

Procedia PDF Downloads 508
22105 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms

Authors: Seulki Lee, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a practical and effective method for quality control, and its most important and widely used technique is the control chart. The main goal of a control chart is to detect any assignable changes that affect the quality output. Most conventional control charts, such as Hotelling's T2 chart, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, in modern, complicated manufacturing systems, control chart techniques that can efficiently handle nonnormal processes are required. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms and multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest-neighbor-based charts, have proven their improved performance in nonnormal situations compared to the T2 chart. Besides nonnormality, time-varying operations are also quite common in real manufacturing fields because of various factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drift. However, traditional control charts cannot accommodate future condition changes of the process because they are formulated from data recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart that is capable of adaptively monitoring time-varying and nonnormal processes. We reformulated the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations, and we defined an updating region for an efficient model-updating structure. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals.
The effectiveness and applicability of the proposed chart were demonstrated through experiments with simulated data and real data from the metal frame process in mobile device manufacturing.
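As a rough illustration of the weighting idea only (not the authors' SVDD formulation), exponential forgetting makes recent samples dominate the model; here a weighted centroid plus radius stands in for the kernel SVDD boundary:

```python
import math

def exp_weights(n, lam=0.05):
    """Exponential forgetting: recent samples weigh more, so the fitted
    model tracks time-varying behaviour (the role of the weighting factor)."""
    w = [math.exp(-lam * (n - 1 - t)) for t in range(n)]
    s = sum(w)
    return [wi / s for wi in w]

def weighted_center_radius(samples, lam=0.05):
    """A much-simplified, centroid-based stand-in for an SVDD boundary:
    time-weighted mean plus the max distance of any sample as radius."""
    w = exp_weights(len(samples), lam)
    dim = len(samples[0])
    center = [sum(wi * x[d] for wi, x in zip(w, samples)) for d in range(dim)]
    radius = max(math.dist(x, center) for x in samples)
    return center, radius

def out_of_control(x, center, radius):
    """Signal when a new observation falls outside the fitted boundary."""
    return math.dist(x, center) > radius
```

Refitting the center and radius inside a sliding updating region gives the adaptive behaviour; a true SVDD would replace the centroid with a kernel-defined hypersphere.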

Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process

Procedia PDF Downloads 303
22104 In-situ Acoustic Emission Analysis of a Polymer Electrolyte Membrane Water Electrolyser

Authors: M. Maier, I. Dedigama, J. Majasan, Y. Wu, Q. Meyer, L. Castanheira, G. Hinds, P. R. Shearing, D. J. L. Brett

Abstract:

Increasing the efficiency of electrolyser technology is commonly seen as one of the main challenges on the way to the Hydrogen Economy. There is a significant lack of understanding of the different states of operation of polymer electrolyte membrane water electrolysers (PEMWE) and of how these influence overall efficiency, in particular regarding the two-phase flow through the membrane, gas diffusion layers (GDL) and flow channels. In order to increase the efficiency of PEMWE and facilitate their spread as a commercial hydrogen production technology, new analytical approaches have to be found. Acoustic emission (AE) offers the possibility of analysing the processes within a PEMWE in a non-destructive, fast and cheap in-situ way. This work describes the generation and analysis of AE data from a PEM water electrolyser for, to the best of our knowledge, the first time in the literature. Different experiments are carried out, each designed so that only specific physical processes occur and AE related solely to one process can be measured; a range of experimental conditions is used to induce different flow regimes within the flow channels and GDL. The resulting AE data are first separated into events, defined as excursions above the noise threshold: each acoustic event consists of a number of consecutive peaks and ends when the wave diminishes below the noise threshold. For each acoustic event the following key attributes are extracted: maximum peak amplitude, duration, number of peaks, number of peaks before the maximum, average peak intensity, and time until the maximum is reached. Each event is then expressed as a vector containing the normalized values of all criteria. Principal component analysis is performed on the resulting data, which orders the criteria by the eigenvalues of their covariance matrix and so provides an easy way of determining which criteria convey the most information about the acoustic data.
In the following, the data are ordered in the two- or three-dimensional space formed by the most relevant criteria axes. By finding regions of this space occupied only by acoustic events originating from one of the three experiments, it is possible to relate physical processes to certain acoustic patterns. Due to the complex nature of the AE data, modern machine learning techniques are needed to recognize these patterns in situ. The AE data produced beforehand allow a self-learning algorithm to be trained and an analytical tool to be developed for diagnosing different operational states in a PEMWE. Combining this technique with the measurement of polarization curves and electrochemical impedance spectroscopy allows for in-situ optimization and recognition of suboptimal states of operation.
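A simplified sketch of the event-extraction and eigenvalue-ranking pipeline described above; the threshold logic, the feature subset, and the power-iteration PCA are illustrative assumptions, not the authors' implementation:

```python
def extract_events(signal, threshold):
    """Split a rectified AE trace into events: an event starts when the
    amplitude exceeds the noise threshold and ends when it drops below."""
    events, current = [], []
    for s in signal:
        s = abs(s)
        if s > threshold:
            current.append(s)
        elif current:
            events.append(current)
            current = []
    if current:
        events.append(current)
    return events

def event_features(ev):
    """A few of the descriptors listed above: peak amplitude, duration
    (in samples), index of the peak, and mean intensity."""
    return [max(ev), len(ev), ev.index(max(ev)), sum(ev) / len(ev)]

def top_principal_eigenvalue(rows, iters=200):
    """Largest eigenvalue of the sample covariance matrix via power
    iteration: the variance captured by the first principal component."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(x[i] * x[j] for x in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return sum(v[i] * sum(cov[i][j] * v[j] for j in range(d)) for i in range(d))
```

Ranking all eigenvalues this way identifies the two or three criteria axes in which the acoustic events from different experiments separate best.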

Keywords: acoustic emission, gas diffusion layers, in-situ diagnosis, PEM water electrolyser

Procedia PDF Downloads 160
22103 An Exploratory Study on the Integration of Neurodiverse University Students into Mainstream Learning and Their Performance: The Case of the Jones Learning Center

Authors: George Kassar, Phillip A. Cartwright

Abstract:

Based on data collected from The Jones Learning Center (JLC), University of the Ozarks, Arkansas, U.S., this study explores the impact of inclusive classroom practices on neuro-diverse college students’ and their consequent academic performance having participated in integrative therapies designed to support students who are intellectually capable of obtaining a college degree, but who require support for learning challenges owing to disabilities, AD/HD, or ASD. The purpose of this study is two-fold. The first objective is to explore the general process, special techniques, and practices of the (JLC) inclusive program. The second objective is to identify and analyze the effectiveness of the processes, techniques, and practices in supporting the academic performance of enrolled college students with learning disabilities following integration into mainstream university learning. Integrity, transparency, and confidentiality are vital in the research. All questions were shared in advance and confirmed by the concerned management at the JLC. While administering the questionnaire as well as conducted the interviews, the purpose of the study, its scope, aims, and objectives were clearly explained to all participants prior starting the questionnaire / interview. Confidentiality of all participants assured and guaranteed by using encrypted identification of individuals, thus limiting access to data to only the researcher, and storing data in a secure location. Respondents were also informed that their participation in this research is voluntary, and they may withdraw from it at any time prior to submission if they wish. Ethical consent was obtained from the participants before proceeding with videorecording of the interviews. This research uses a mixed methods approach. The research design involves collecting, analyzing, and “mixing” quantitative and qualitative methods and data to enable a research inquiry. 
The research process is organized around a five-pillar approach. The first three pillars focus on testing the first hypothesis (H1), directed toward determining the extent to which the academic performance of JLC students improved after involvement with the comprehensive JLC special program. The other two pillars relate to the second hypothesis (H2), directed toward determining the extent to which the collective and applied knowledge at JLC is distinctive from typical practices in the field. The data collected for the research were obtained from three sources: 1) a set of secondary data in the form of Grade Point Average (GPA) records received from the registrar, 2) a set of primary data collected through a structured questionnaire administered to students and alumni at JLC, and 3) another set of primary data collected through interviews conducted with staff and educators at JLC. The significance of this study is twofold. First, it validates the effectiveness of the special program at JLC for college-level students who learn differently. Second, it identifies the distinctiveness of the mix of techniques, methods, and practices, including the special individualized and personalized one-on-one approach at JLC.
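
The pre/post GPA comparison behind H1 can be sketched as a paired t-test on the same students' records before and after the program; the GPA values below are invented placeholders, not JLC data.

```python
import math

def paired_t(before, after):
    """Paired t-statistic for before/after measurements on the same students."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical GPAs before and after program involvement (illustration only)
before = [2.1, 2.4, 1.9, 2.8, 2.2, 2.5]
after = [2.6, 2.9, 2.3, 3.1, 2.4, 3.0]
t = paired_t(before, after)
```

A large positive t would support H1; the real study would compare t against the critical value for n − 1 degrees of freedom.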

Keywords: education, neuro-diverse students, program effectiveness, Jones learning center

Procedia PDF Downloads 77
22102 Relative Entropy Used to Determine the Divergence of Cells in Single Cell RNA Sequence Data Analysis

Authors: An Chengrui, Yin Zi, Wu Bingbing, Ma Yuanzhu, Jin Kaixiu, Chen Xiao, Ouyang Hongwei

Abstract:

Single cell RNA sequencing (scRNA-seq) is one of the most effective tools for studying the transcriptomics of biological processes. Currently, the similarity between cells is usually measured with Euclidean distance or its derivatives. However, the process of scRNA-seq follows a multivariate Bernoulli event model, so we hypothesized that valuing the divergence between cells with relative entropy would be more efficient than using Euclidean distance. In this study, we compared the performance of Euclidean distance, Spearman correlation distance, and relative entropy using scRNA-seq data from the early, medial, and late stages of limb development generated in our lab. Relative entropy outperformed the other methods according to a cluster potential test. Furthermore, we developed KL-SNE, an algorithm that modifies t-SNE by replacing its Euclidean-distance definition of the divergence between cells with Kullback–Leibler divergence. Results showed that KL-SNE was more effective at dissecting cell heterogeneity than t-SNE, indicating the better performance of relative entropy over Euclidean distance. Specifically, the chondrocytes expressing Comp were clustered together by KL-SNE but not by t-SNE. Surprisingly, cells in the early stage were surrounded by cells in the medial stage in the KL-SNE projection, while medial cells neighbored the late stage in the t-SNE projection. These results parallel the heatmap, which showed that cells in the medial stage were more heterogeneous than cells in the other stages. In addition, we found that the results of KL-SNE tend to follow a Gaussian distribution compared with those of t-SNE, which was also verified with the analysis of scRNA-seq data from another study on human embryo development. KL-SNE is therefore also an effective way to convert a non-Gaussian distribution to a Gaussian one and to facilitate subsequent statistical processes. Thus, relative entropy is potentially a better way to determine the divergence of cells in scRNA-seq data analysis.
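
The core idea can be sketched in a few lines: normalize each cell's gene counts to a probability distribution and compute the relative entropy between cells instead of Euclidean distance. The gene counts below are made up, and the pseudocount (to avoid log of zero) is an implementation assumption, not a detail from the study.

```python
import math

def kl_divergence(p_counts, q_counts, pseudocount=1e-6):
    """Relative entropy D(P || Q) between two normalized expression profiles."""
    p = [c + pseudocount for c in p_counts]
    q = [c + pseudocount for c in q_counts]
    sp, sq = sum(p), sum(q)
    return sum((pi / sp) * math.log((pi / sp) / (qi / sq))
               for pi, qi in zip(p, q))

# Toy gene-count vectors for three cells: a and b are similar, c is distinct
cell_a = [10, 0, 5, 85]
cell_b = [12, 1, 4, 83]
cell_c = [80, 5, 10, 5]
```

Because KL divergence is asymmetric, a KL-SNE-style embedding would typically symmetrize it, e.g., use D(P‖Q) + D(Q‖P) as the pairwise dissimilarity.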

Keywords: single cell RNA sequence, similarity measurement, relative entropy, KL-SNE, t-SNE

Procedia PDF Downloads 342
22101 Secret Security Smart Lock Using Artificial Intelligence Hybrid Algorithm

Authors: Vahid Bayrami Rad

Abstract:

From the time humans developed a collective way of life through the development of urbanization, security has always been considered one of the most important challenges of life. To protect property, locks have always been a practical tool. With the advancement of technology, the form of locks has changed from mechanical to electric. One of the most widely used fields for artificial intelligence is surveillance security systems. Currently, the technology used in smart anti-theft door handles is one of the most promising fields for applying artificial intelligence. Artificial intelligence can learn, calculate, interpret, and process by analyzing data with the help of algorithms and mathematical models, and thereby make smart decisions. An Arduino board is used to process the data.

Keywords: arduino board, artificial intelligence, image processing, solenoid lock

Procedia PDF Downloads 71
22100 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
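
The bagging idea behind the Random Forest model can be sketched without any ML library as a bootstrap ensemble of one-split regression trees. The feature here (a count of scope changes) and the overrun values are invented for illustration; a real model would use full decision trees and many project features.

```python
import random
from statistics import mean

def fit_stump(X, y):
    """Best single split (feature, threshold) minimizing squared error."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            ml, mr = mean(left), mean(right)
            err = (sum((yi - ml) ** 2 for yi in left)
                   + sum((yi - mr) ** 2 for yi in right))
            if best is None or err < best[0]:
                best = (err, f, t, ml, mr)
    if best is None:                      # degenerate sample: predict the mean
        m = mean(y)
        return lambda row: m
    _, f, t, ml, mr = best
    return lambda row: ml if row[f] <= t else mr

def fit_forest(X, y, n_trees=25, seed=0):
    """Bagging: each stump is trained on a bootstrap resample of the data."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: mean(s(row) for s in stumps)

# Invented training data: cost overrun (%) vs. number of scope changes
X = [[0], [1], [2], [3], [4], [5], [6], [7]]
y = [1.0, 2.0, 1.5, 2.5, 9.0, 10.0, 11.0, 9.5]
predict = fit_forest(X, y)
```

Averaging over resampled trees is what stabilizes the prediction and, in a full Random Forest, also yields the feature-importance estimates the study used to surface risk factors.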

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 45
22099 Hydrogen: Contention-Aware Hybrid Memory Management for Heterogeneous CPU-GPU Architectures

Authors: Yiwei Li, Mingyu Gao

Abstract:

Integrating hybrid memories with heterogeneous processors could leverage heterogeneity in both compute and memory domains for better system efficiency. To ensure performance isolation, we introduce Hydrogen, a hardware architecture to optimize the allocation of hybrid memory resources to heterogeneous CPU-GPU systems. Hydrogen supports efficient capacity and bandwidth partitioning between CPUs and GPUs in both memory tiers. We propose decoupled memory channel mapping and token-based data migration throttling to enable flexible partitioning. We also support epoch-based online search for optimized configurations and lightweight reconfiguration with reduced data movements. Hydrogen significantly outperforms existing designs by 1.21x on average and up to 1.31x.
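
The token-based data migration throttling can be illustrated with an epoch-refilled token bucket. This is a software analogy of the mechanism; Hydrogen implements it in hardware, and the token budget below is an arbitrary value chosen for the example.

```python
class MigrationThrottle:
    """Token bucket: each epoch grants a fixed migration budget;
    every page migration spends one token, throttling data movement."""

    def __init__(self, tokens_per_epoch):
        self.tokens_per_epoch = tokens_per_epoch
        self.tokens = tokens_per_epoch

    def new_epoch(self):
        # Epoch boundary: refill the migration budget
        self.tokens = self.tokens_per_epoch

    def try_migrate(self):
        # Permit the migration only while budget remains
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

throttle = MigrationThrottle(tokens_per_epoch=2)
grants = [throttle.try_migrate() for _ in range(3)]  # [True, True, False]
```

Capping migrations per epoch is one way to keep CPU and GPU traffic from interfering on the shared memory channels, which is the performance-isolation goal the abstract describes.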

Keywords: hybrid memory, heterogeneous systems, dram cache, graphics processing units

Procedia PDF Downloads 109
22098 Computer-Based versus Paper-Based Tests: A Comparative Study of Two Types of Indonesian National Examination for Senior High School Students

Authors: Faizal Mansyur

Abstract:

The objective of this research is to find out whether there is a significant difference in the English language scores of senior high school students in the Indonesian National Examination between students tested with computer-based and paper-based tests. The population of this research is senior high school students in South Sulawesi Province who sat the Indonesian National Examination in the 2015/2016 academic year. The sample consists of 800 students’ scores from 8 schools, taken by employing a multistage random sampling technique. The data are secondary data, obtained from the education office of South Sulawesi. In analyzing the collected data, the researcher employed an independent samples t-test with the help of the SPSS v.24 program. The findings reveal that there is a significant difference in the English language scores of senior high school students between those tested with computer-based and paper-based tests (p < .05). Moreover, students tested with the PBT (Mean = 63.13, SD = 13.63) achieved higher scores than those tested with the CBT (Mean = 46.33, SD = 14.68).
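
The reported gap can be checked from the summary statistics alone with Welch's t-statistic. The group sizes below assume the 800 sampled scores split evenly between the CBT and PBT groups, which the abstract does not actually state.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t-statistic from group means, standard deviations, and sizes."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Means and SDs reported in the study; n = 400 per group is an assumption
t = welch_t(63.13, 13.63, 400, 46.33, 14.68, 400)
```

Under that assumption the statistic is far beyond any conventional critical value, consistent with the reported p < .05.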

Keywords: computer-based test, paper-based test, Indonesian national examination, testing

Procedia PDF Downloads 172
22097 Close-Range Remote Sensing Techniques for Analyzing Rock Discontinuity Properties

Authors: Sina Fatolahzadeh, Sergio A. Sepúlveda

Abstract:

This paper presents advanced developments in close-range, terrestrial remote sensing techniques to enhance the characterization of rock masses. The study integrates two state-of-the-art laser-scanning technologies, the HandySCAN and GeoSLAM laser scanners, to extract high-resolution geospatial data for rock mass analysis. These instruments offer high accuracy, precision, low acquisition time, and high efficiency in capturing intricate geological features in small to medium size outcrops and slope cuts. Using the HandySCAN and GeoSLAM laser scanners facilitates real-time, three-dimensional mapping of rock surfaces, enabling comprehensive assessments of rock mass characteristics. The collected data provide valuable insights into structural complexities, surface roughness, and discontinuity patterns, which are essential for geological and geotechnical analyses. The synergy of these advanced remote sensing technologies contributes to a more precise and straightforward understanding of rock mass behavior. In this case, the main parameters of RQD, joint spacing, persistence, aperture, roughness, infill, weathering, water condition, and joint orientation in a slope cut along the Sea-to-Sky Highway, BC, were remotely analyzed to calculate and evaluate the Rock Mass Rating (RMR) and Geological Strength Index (GSI) classification systems. Automatic and manual analyses of the acquired data are then compared with field measurements. The results show the usefulness of the proposed remote sensing methods and their appropriate conformity with the actual field data.

Keywords: remote sensing, rock mechanics, rock engineering, slope stability, discontinuity properties

Procedia PDF Downloads 70
22096 Further Investigation of α+12C and α+16O Elastic Scattering

Authors: Sh. Hamada

Abstract:

The current work aims to study the rainbow like-structure observed in the elastic scattering of alpha particles on both 12C and 16O nuclei. We reanalyzed the experimental elastic scattering angular distributions data for α+12C and α+16O nuclear systems at different energies using both optical model and double folding potential of different interaction models such as: CDM3Y1, DDM3Y1, CDM3Y6 and BDM3Y1. Potential created by BDM3Y1 interaction model has the shallowest depth which reflects the necessity to use higher renormalization factor (Nr). Both optical model and double folding potential of different interaction models fairly reproduce the experimental data.

Keywords: density distribution, double folding, elastic scattering, nuclear rainbow, optical model

Procedia PDF Downloads 238
22095 Energy Consumption and Economic Growth: Testimony of Selected Sub-Saharan Africa Countries

Authors: Alfred Quarcoo

Abstract:

The main purpose of this paper is to examine the causal relationship between energy consumption and economic growth in Sub-Saharan Africa using panel data techniques. Annual data on energy consumption and economic growth (proxied by real gross domestic product per capita) spanning 1990 to 2016 from the World Bank development indicators database were used. The results of the Augmented Dickey–Fuller unit root test show that the series for all countries are not stationary at levels. The log of economic growth in Benin and Congo becomes stationary after first differencing, the log of energy consumption becomes stationary for all countries after first differencing, and the log of economic growth in Kenya and Zimbabwe was found to be stationary only after second differencing. The findings of the Johansen cointegration test demonstrate that the log of energy consumption and the log of economic growth are not cointegrated for Kenya and Zimbabwe, so no long-run relationship between the variables was established in any country. The Granger causality test indicates a unidirectional causality running from energy use to economic growth in Kenya and no causal linkage between energy consumption and economic growth in Benin, Congo, and Zimbabwe.
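
A Granger causality test boils down to comparing a restricted regression (lagged growth only) with an unrestricted one (lagged growth plus lagged energy use) via an F-statistic. A stdlib-only sketch with a single lag, run on simulated series rather than the World Bank data:

```python
import random

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols_rss(X, y):
    """Residual sum of squares of an OLS fit via the normal equations."""
    k, n = len(X[0]), len(X)
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = solve(XtX, Xty)
    return sum((y[i] - sum(beta[j] * X[i][j] for j in range(k))) ** 2
               for i in range(n))

def granger_f(x, y):
    """F-statistic: does lag-1 x improve prediction of y over lag-1 y alone?"""
    rows = range(1, len(y))
    yy = [y[t] for t in rows]
    rss_r = ols_rss([[1.0, y[t - 1]] for t in rows], yy)            # restricted
    rss_u = ols_rss([[1.0, y[t - 1], x[t - 1]] for t in rows], yy)  # unrestricted
    n, q, k = len(yy), 1, 3  # q restrictions, k unrestricted parameters
    return ((rss_r - rss_u) / q) / (rss_u / (n - k))

# Simulated series in which energy use (x) drives growth (y) with one lag
rng = random.Random(1)
x = [rng.uniform(-1.0, 1.0) for _ in range(120)]
y = [0.0]
for t in range(1, 120):
    y.append(0.8 * x[t - 1] + 0.1 * rng.gauss(0.0, 1.0))
f_xy = granger_f(x, y)
```

A large F rejects the null of no Granger causality from x to y; the study's actual analysis additionally selects lag length and pre-tests for stationarity and cointegration, as summarized above.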

Keywords: cointegration, Granger causality, Sub-Saharan Africa, World Bank development indicators

Procedia PDF Downloads 58
22094 Time Travel Testing: A Mechanism for Improving Renewal Experience

Authors: Aritra Majumdar

Abstract:

While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and of showcasing how successful an organization is at holding on to its customers. It is an experimentally proven fact that the lion’s share of profit always comes from existing customers. Hence, seamless management of renewal journeys across different channels goes a long way in improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper will focus on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. Along with that, it will call out some of the best practices and common accelerator implementation ideas which are generic across verticals like healthcare, insurance, etc. In this abstract, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done, and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover the required customer segments and narrowing it down to multiple offer sequencings based on defined parameters are key to successful time travel testing.
Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will talk about the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will talk about the focus areas of enterprise automation and how automation testing can be leveraged to improve the overall quality without compromising on the project schedule. Along with the above-mentioned items, the white paper will elaborate on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum it up, this paper will be written based on the real-time experience author had on time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight the key learnings which will help other teams to implement time travel testing successfully.
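
In application code, the usual backbone of time travel testing is a clock that can be frozen or shifted instead of direct calls to the system time. A minimal stdlib sketch, in which the renewal window, dates, and function names are all invented for illustration:

```python
import datetime as dt

class Clock:
    """Injectable clock: production reads real time; tests can time travel."""

    def __init__(self, now=None):
        self._frozen = now             # a fixed datetime, or None for real time
        self._offset = dt.timedelta(0)

    def now(self):
        if self._frozen is not None:
            return self._frozen
        return dt.datetime.now() + self._offset

    def travel(self, **kwargs):
        """Shift time forward (or backward with negative values)."""
        delta = dt.timedelta(**kwargs)
        if self._frozen is not None:
            self._frozen += delta
        else:
            self._offset += delta

def renewal_due(policy_end, clock, window_days=30):
    """True once 'now' enters the renewal window before the policy end date."""
    return clock.now() >= policy_end - dt.timedelta(days=window_days)

clock = Clock(now=dt.datetime(2024, 1, 1))
policy_end = dt.datetime(2024, 2, 15)
```

A test can then jump the clock forward to exercise a renewal journey without waiting for the calendar, which is exactly what the planning pillar schedules across the impacted systems.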

Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas

Procedia PDF Downloads 164
22093 Electronic Data Interchange (EDI) in the Supply Chain: Impact on Customer Satisfaction

Authors: Hicham Amine, Abdelouahab Mesnaoui

Abstract:

Electronic data interchange (EDI) is the computer-to-computer exchange of structured business information. This information typically takes the form of standardized electronic business documents, such as invoices, purchase orders, bills of lading, and so on. The purpose of this study is to identify the impact EDI might have on the supply chain, and on customer satisfaction in particular, keeping in mind the constraints the organization might face. This study included 139 subject matter experts (SMEs) who participated by responding to a distributed survey. 85% responded that they were strongly in favor of the implementation, while 10% were neutral and 5% were against it. In the quality assurance department, 75% of the clients agreed to move forward with the change, whereas 10% stayed neutral and 15% were against it. In the legal department, 80% of the answers were in favor of the implementation, 10% of the participants stayed neutral, and the remaining 10% were against it. The participants were 40% male and 60% female (sex ratio F/M = 1.5). The survey also distinguished three categories of technical background: 80% of respondents had a technical background, 15% had a nontechnical background, and 5% had an average technical background. This study examines the impact of EDI on customer satisfaction, which is the primary hypothesis, and justifies the importance of an implementation that enhances customer satisfaction.

Keywords: electronic data interchange, supply chain, subject matter experts, customer satisfaction

Procedia PDF Downloads 343
22092 Accelerating Side Channel Analysis with Distributed and Parallelized Processing

Authors: Kyunghee Oh, Dooho Choi

Abstract:

Although a cryptographic algorithm may have no theoretical weakness, side channel analysis can extract secret data from the physical implementation of a cryptosystem. The analysis is based on extra information such as timing, power consumption, electromagnetic leaks, or even sound, which can be exploited to break the system. Differential Power Analysis (DPA) is one of the most popular such analyses, computing the statistical correlations between hypothesized secret keys and measured power consumption. It usually requires processing huge amounts of data and takes a long time; for some devices with countermeasures, it may take several weeks. We suggest and evaluate methods to shorten the time needed to analyze cryptosystems. Our methods include distributed computing and parallelized processing.
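
The per-sample correlation step of DPA parallelizes naturally, since each sample point of the power trace can be correlated with the key-guess hypothesis independently. A toy sketch distributing sample points across threads, with fabricated traces standing in for real measurements:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

def dpa_correlations(traces, hypothesis, workers=4):
    """Correlate a key-guess leakage hypothesis against every sample point,
    splitting the independent points across worker threads."""
    n_points = len(traces[0])
    columns = [[trace[j] for trace in traces] for j in range(n_points)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda col: pearson(col, hypothesis), columns))

# Fabricated traces: only sample point 2 actually leaks the hypothesis
hypothesis = [float(i) for i in range(8)]
traces = [[(i * 7) % 5, (i * 3) % 4, i + 0.5] for i in range(8)]
corrs = dpa_correlations(traces, hypothesis, workers=2)
```

The correct key guess produces a correlation spike at the leaking sample point; the same partitioning extends to distributing trace columns across machines rather than threads.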

Keywords: DPA, distributed computing, parallelized processing, side channel analysis

Procedia PDF Downloads 432
22091 Monitoring of Hydrological Parameters in the Alexandra Jukskei Catchment in South Africa

Authors: Vhuhwavho Gadisi, Rebecca Alowo, German Nkhonjera

Abstract:

It has been noted that technical tools for managing groundwater resources are not readily accessible. The lack of such systems hinders the groundwater management processes necessary for decision-making through monitoring and evaluation in the Jukskei River of the Crocodile River (West) Basin in Johannesburg, South Africa. Several challenges have been identified in South Africa's Jukskei Catchment concerning groundwater management, including the following: there are gaps in data records; monitoring staff need training and equipment; monitoring capacities and equipment require formal accreditation; and there is no access to regulation instruments (e.g., meters). Taking into consideration needs and human requirements as per typical population densities in various regions of South Africa, several groundwater level monitoring stations need to be constructed in a particular segment; the available raw data on groundwater levels should be converted into consumable products, for example, short reports on delicate areas (e.g., dolomite compartments, wetlands, aquifers, and sole sources); and, considering the increasing civil unrest, there has been vandalism and theft of groundwater monitoring infrastructure. GIS was employed at the catchment level to plot the relationship between the identified groundwater parameters in the catchment area and the identified borehole. GIS-based maps were designed for groundwater monitoring and pretested on one borehole in the Jukskei catchment. These data will be used to establish changes in the borehole relative to changes in the catchment area according to the identified parameters.

Keywords: GIS, monitoring, Jukskei, catchment

Procedia PDF Downloads 98
22090 Transportation Mode Classification Using GPS Coordinates and Recurrent Neural Networks

Authors: Taylor Kolody, Farkhund Iqbal, Rabia Batool, Benjamin Fung, Mohammed Hussaeni, Saiqa Aleem

Abstract:

The rising threat of climate change has led to an increase in public awareness of and care about our collective and individual environmental impact. A key component of this impact is our use of cars and other polluting forms of transportation, but it is often difficult for an individual to know how severe this impact is. While there are applications that offer this feedback, they require manual entry of the transportation mode used for a given trip, which can be burdensome. To alleviate this shortcoming, data from the 2016 TRIPlab datasets have been used to train a variety of machine learning models to automatically recognize the mode of transportation. An accuracy of 89.6% is achieved using a single deep neural network model with a Gated Recurrent Unit (GRU) architecture applied directly to trip data points over 4 primary classes, namely walking, public transit, car, and bike. These results are comparable in accuracy to results achieved by others using ensemble methods and require far less computation when classifying new trips. The lack of trip context data, e.g., bus routes, bike paths, etc., and the need for only a single set of weights make this an appropriate methodology for applications hoping to reach a broad demographic and provide responsive feedback.
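
The GRU recurrence the classifier relies on can be written out directly. Below is a scalar, untrained sketch: the weights are arbitrary, the input sequence loosely mimics GPS-derived speeds, and the update rule follows one common convention (the gate z interpolates between the previous state and the candidate state); it is not the paper's trained model.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def gru_step(x, h, p):
    """One GRU step for a scalar input and hidden state (vectors omitted).

    p holds the nine scalar weights/biases for the update gate, reset gate,
    and candidate state."""
    wz, uz, bz, wr, ur, br, wh, uh, bh = p
    z = sigmoid(wz * x + uz * h + bz)                 # update gate
    r = sigmoid(wr * x + ur * h + br)                 # reset gate
    h_cand = math.tanh(wh * x + uh * (r * h) + bh)    # candidate state
    return (1 - z) * h + z * h_cand                   # gated interpolation

# Arbitrary (untrained) weights; the inputs mimic a short sequence of speeds
params = (0.5, -0.3, 0.1, 0.8, 0.2, -0.1, 1.0, 0.4, 0.0)
h = 0.0
for speed in [0.1, 0.2, 0.15, 0.9, 1.2]:
    h = gru_step(speed, h, params)
```

In the trained model, vectors of such hidden states summarize the trip, and a final classification layer maps the last state onto the four transportation classes.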

Keywords: classification, gated recurrent unit, recurrent neural network, transportation

Procedia PDF Downloads 141
22089 Audit of TPS Photon Beam Dataset for Small Field Output Factors Using OSLDs against RPC Standard Dataset

Authors: Asad Yousuf

Abstract:

Purpose: The aim of the present study was to audit a treatment planning system beam dataset for small field output factors against the standard dataset produced by the Radiological Physics Center (RPC) from a multicenter study. Such data are crucial for the validity of special techniques, i.e., IMRT or stereotactic radiosurgery. Materials/Method: In this study, multiple small field output factor datasets were measured and calculated for 6 to 18 MV x-ray beams using the RPC-recommended methods. These beam datasets were measured at 10 cm depth for 10 × 10 cm2 to 2 × 2 cm2 field sizes, defined by collimator jaws at 100 cm. The measurements were made with Landauer nanoDot OSLDs, whose volume is small enough to gather a full ionization reading even for a 1 × 1 cm2 field size. At our institute, the beam data, including output factors, had been commissioned at 5 cm depth with an SAD setup. For comparison with the RPC data, the output factors were converted to an SSD setup using tissue phantom ratios. The SSD setup also ensures coverage of the ion chamber in the 2 × 2 cm2 field size. The measured output factors were also compared with those calculated by Eclipse™ treatment planning software. Result: The measured and calculated output factors agree with the RPC dataset within 1% and 4%, respectively. The larger discrepancies in the TPS reflect the increased challenge of converting measured data into a commissioned beam model for very small fields. Conclusion: OSLDs are a simple, durable, and accurate tool to verify doses delivered using small photon beam fields down to 1 × 1 cm2 field sizes. The study emphasizes that the treatment planning system should always be evaluated for small field output factors to ensure accurate dose delivery in the clinical setting.
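
The quantity being audited is simple arithmetic: the output factor is the detector reading for the test field divided by the reading for the 10 × 10 cm² reference field, and the audit criterion is a percent difference against the RPC value. The readings and RPC value below are invented to illustrate the check.

```python
def output_factor(reading_field, reading_reference):
    """Output factor relative to the 10 x 10 cm2 reference field reading."""
    return reading_field / reading_reference

def percent_difference(measured, reference):
    return 100.0 * (measured - reference) / reference

# Invented OSLD readings (arbitrary units) and a hypothetical RPC value
of_2x2 = output_factor(0.842, 1.000)
diff = percent_difference(of_2x2, 0.838)
agrees_with_rpc = abs(diff) <= 1.0   # the 1% agreement reported for measurements
```

A real audit would additionally fold in the OSLD calibration and the SAD-to-SSD conversion via tissue phantom ratios described above.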

Keywords: small field dosimetry, optically stimulated luminescence, audit treatment, radiological physics center

Procedia PDF Downloads 331
22088 Relationship of Indoor and Outdoor Levels of Black Carbon in an Urban Environment

Authors: Daria Pashneva, Julija Pauraite, Agne Minderyte, Vadimas Dudoitis, Lina Davuliene, Kristina Plauskaite, Inga Garbariene, Steigvile Bycenkiene

Abstract:

Black carbon (BC) has received particular attention around the world, not only for its impact on regional and global climate change but also for its impact on air quality and public health. To study the relationship between indoor and outdoor BC concentrations, measurements were carried out in Vilnius, Lithuania. The study aims to determine the relationship between the concentrations and to identify daily and weekly patterns, with a further opportunity to analyze the key factors affecting the indoor concentration of BC. In this context, indoor and outdoor continuous real-time measurements of optical BC-related light absorption by aerosol particles were carried out during the cold season (from October to December 2020). The measurement venue was an office located in an urban background environment. Equivalent black carbon (eBC) mass concentration was measured by an Aethalometer (Magee Scientific, model AE-31). The optical transmission of carbonaceous aerosol particles was measured sequentially at seven wavelengths (λ = 370, 470, 520, 590, 660, 880, and 950 nm), and the eBC mass concentration was derived from the light absorption coefficient (σab) at the 880 nm wavelength. The diurnal indoor eBC mass concentration was found to vary in the range from 0.02 to 0.08 µgm⁻³, while the outdoor eBC mass concentration varied from 0.34 to 0.99 µgm⁻³. Diurnal variations of outdoor vs. indoor eBC mass concentration showed an increased contribution between 10:00 and 12:00 (GMT+2), with the highest indoor eBC mass concentration of 0.14 µgm⁻³. The indoor/outdoor eBC ratio (I/O) was below one throughout the entire measurement period. The weekend levels of eBC mass concentration were lower than the weekday levels by 33% indoors and 28% outdoors. Hourly mean eBC mass concentrations for weekdays and weekends show diurnal cycles, which could be explained by the periodicity of traffic intensity and heating activities.
The results show a moderate influence of outdoor eBC emissions on the indoor eBC level.
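
The two headline statistics, hourly mean eBC and the indoor/outdoor ratio, reduce to a group-by over the hour of day. A stdlib sketch with fabricated concentrations (the real series is three months of continuous Aethalometer data):

```python
from collections import defaultdict
from statistics import mean

def diurnal_means(records):
    """Mean eBC mass concentration per hour of day from (hour, value) pairs."""
    by_hour = defaultdict(list)
    for hour, value in records:
        by_hour[hour % 24].append(value)
    return {h: mean(vals) for h, vals in sorted(by_hour.items())}

def io_ratios(indoor_means, outdoor_means):
    """Indoor/outdoor ratio per hour, skipping hours missing in either set."""
    return {h: indoor_means[h] / outdoor_means[h]
            for h in indoor_means
            if h in outdoor_means and outdoor_means[h] > 0}

# Fabricated hourly eBC values (ug/m3): two days of data, two hours per day
indoor = diurnal_means([(10, 0.06), (11, 0.08), (34, 0.10), (35, 0.06)])
outdoor = diurnal_means([(10, 0.60), (11, 0.90), (34, 0.80), (35, 0.70)])
ratios = io_ratios(indoor, outdoor)
```

An I/O ratio below one at every hour, as reported above, indicates that outdoor emissions, rather than indoor sources, dominate the office eBC level.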

Keywords: black carbon, climate change, indoor air quality, I/O ratio

Procedia PDF Downloads 205