Search results for: maximal data sets
25282 Algorithms used in Spatial Data Mining GIS
Authors: Vahid Bairami Rad
Abstract:
Extracting knowledge from spatial data such as GIS data is important for reducing the data and extracting information. The development of new techniques and tools that support humans in transforming data into useful knowledge has therefore been the focus of the relatively new, interdisciplinary research area of knowledge discovery in databases. We introduce a set of database primitives, or basic operations, for spatial data mining that are sufficient to express most of the spatial data mining algorithms in the literature. This approach has several advantages: much as with the relational standard language SQL, the use of standard primitives will speed up the development of new data mining algorithms and make them more portable. We introduce a database-oriented framework for spatial data mining based on the concepts of neighborhood graphs and paths, define a small set of basic operations on these graphs and paths as database primitives for spatial data mining, and present techniques to support these primitives efficiently in a commercial DBMS.
Keywords: spatial database, knowledge discovery in databases, data mining, spatial relationship, predictive data mining
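To make the idea of database primitives concrete, the following minimal Python sketch shows what a neighborhood graph and two basic operations on it might look like. The predicate, function names, and path-extension semantics are illustrative assumptions, not the operations defined in the paper.

```python
from collections import defaultdict

def build_neighborhood_graph(objects, related):
    """Build a neighborhood graph from object ids and a spatial
    predicate related(a, b) -> bool (e.g., distance below a threshold)."""
    graph = defaultdict(set)
    for a in objects:
        for b in objects:
            if a != b and related(a, b):
                graph[a].add(b)
    return graph

def neighbors(graph, node):
    """Primitive: all direct neighbors of a spatial object."""
    return graph[node]

def extend_paths(graph, paths):
    """Primitive: extend each neighborhood path by one edge, avoiding cycles."""
    return [p + [n] for p in paths for n in graph[p[-1]] if n not in p]

# Usage: objects 0..4 on a line, neighbors within distance 1.
g = build_neighborhood_graph(range(5), lambda a, b: abs(a - b) <= 1)
print(neighbors(g, 2), extend_paths(g, [[0]]))
```

A DBMS-backed implementation would express the same operations as indexed queries rather than in-memory loops.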
Procedia PDF Downloads 460
25281 Application of Support Vector Machines in Forecasting Non-Residential
Authors: Wiwat Kittinaraporn, Napat Harnpornchai, Sutja Boonyachut
Abstract:
This paper deals with the application of the Support Vector Machine (SVM), a machine learning technique. The objective of this study is to explore the variables and parameters of forecasting factors in the construction industry and to build a forecasting model for construction quantity in Thailand. The scope of the research is the non-residential construction quantity in Thailand. There are 44 sets of yearly data available, ranging from 1965 to 2009. The correlation between economic indicators and construction demand with a lag of one year was developed by Apichat Buakla. The selected variables are used to develop SVM models to forecast the non-residential construction quantity in Thailand. The parameters are selected using the ten-fold cross-validation method. The results are reported in terms of Mean Absolute Percentage Error (MAPE). The MAPE value for the non-residential construction quantity predicted by epsilon-SVR with a Radial Basis Function (RBF) kernel is 5.90. Analysis of the experimental results shows that the support vector machine modelling technique can be applied to forecast construction quantity time series, which is useful for decision planning and management purposes.
Keywords: forecasting, non-residential, construction, support vector machines
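A compact sketch of the recipe described above (epsilon-SVR with an RBF kernel tuned by ten-fold cross-validation and scored by MAPE) is shown below using scikit-learn. The synthetic indicator data and the parameter grid are assumptions standing in for the study's 44 yearly observations.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

# Stand-in data: 44 years x 5 lagged economic indicators.
rng = np.random.default_rng(0)
X = rng.normal(size=(44, 5))
y = 100 + X @ rng.normal(size=5) + rng.normal(scale=2.0, size=44)

search = GridSearchCV(
    SVR(kernel="rbf"),                       # epsilon-SVR with RBF kernel
    {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1], "epsilon": [0.1, 1]},
    cv=10,                                   # ten-fold cross-validation
    scoring="neg_mean_absolute_percentage_error",
)
search.fit(X, y)
print(search.best_params_, f"MAPE = {-search.best_score_ * 100:.2f}%")
```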
Procedia PDF Downloads 434
25280 Using Machine Learning to Classify Different Body Parts and Determine Healthiness
Authors: Zachary Pan
Abstract:
Our general mission is to solve the problem of classifying images into different body part types and deciding whether each of them is healthy. For now, however, we determine healthiness for only one-sixth of the body parts, specifically the chest: we detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach to this problem is to split it into two parts: first classify the image, then determine whether it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Using the test set, we can obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. In order to increase this testing accuracy, we can also apply more complex algorithms to the models, like multiplicative weight update. For the second part of the problem, determining whether the body part is healthy, we use another dataset consisting of healthy and non-healthy images of the specific body part, once again split into test and training sets. We then train another neural network on those training set images and use the testing set to measure its accuracy. We do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, neural networks with multiplicative weight update, neural networks with the black box algorithm, and the convolutional neural network achieved 96.83, 97.33, 97.83, 96.67, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines whether the images are healthy is around 78.37 percent.
Keywords: body part, healthcare, machine learning, neural networks
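The split-fit-score loop described above is the standard supervised recipe; a minimal sketch follows, using scikit-learn's bundled digits images as a stand-in for the body-part dataset (the dataset choice and model settings are assumptions).

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Split into training and test sets so accuracy is measured on images
# the model has never seen.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```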
Procedia PDF Downloads 103
25279 Data Stream Association Rule Mining with Cloud Computing
Authors: B. Suraj Aravind, M. H. M. Krishna Prasad
Abstract:
There exist emerging applications of data streams that require association rule mining, such as network traffic monitoring, web click stream analysis, sensor data, and data from satellites. Data streams typically arrive continuously, at high speed, in huge amounts, and with changing data distributions. This raises new issues that need to be considered when developing association rule mining techniques for stream data. This paper introduces an improved data stream association rule mining algorithm that eliminates resource limitations by using the concept of cloud computing. This inclusion may lead to additional, as yet unknown problems that need further research.
Keywords: data stream, association rule mining, cloud computing, frequent itemsets
Procedia PDF Downloads 501
25278 Wind Wave Modeling Using MIKE 21 SW Spectral Model
Authors: Pouya Molana, Zeinab Alimohammadi
Abstract:
Determining wind wave characteristics is essential for implementing Coastal and Marine engineering projects such as designing coastal and marine structures and estimating sediment transport and coastal erosion rates. To predict significant wave height (H_s), this study applies the third-generation spectral wave model MIKE 21 SW along with the Coastal Engineering Manual (CEM) method. For SW model calibration and verification, two data sets of meteorological and wave-spectrum measurements are used. The model was forced with time-varying wind, and the results showed that the difference ratio mean, the standard deviation of the difference ratio, and the correlation coefficient of the SW model for the H_s parameter are 1.102, 0.279, and 0.983, respectively, whereas for the CEM method the corresponding values are 0.869, 1.317, and 0.8359. Comparing these results reveals that the CEM method has larger errors than the MIKE 21 SW third-generation spectral wave model, and that a higher correlation coefficient does not necessarily mean higher accuracy.
Keywords: MIKE 21 SW, CEM method, significant wave height, difference ratio
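The three agreement statistics quoted above are easy to compute from paired observed/modeled series; the sketch below assumes the difference ratio is defined as modeled over observed H_s, which is one common convention but an assumption here.

```python
import numpy as np

def agreement_stats(observed_hs, modeled_hs):
    """Difference-ratio mean, difference-ratio std, and correlation."""
    observed = np.asarray(observed_hs, dtype=float)
    modeled = np.asarray(modeled_hs, dtype=float)
    ratio = modeled / observed
    r = np.corrcoef(observed, modeled)[0, 1]
    return ratio.mean(), ratio.std(), r

mean_dr, std_dr, corr = agreement_stats([1.2, 1.5, 2.0, 2.4],
                                        [1.3, 1.4, 2.1, 2.6])
```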
Procedia PDF Downloads 401
25277 Signs, Signals and Syndromes: Algorithmic Surveillance and Global Health Security in the 21st Century
Authors: Stephen L. Roberts
Abstract:
This article offers a critical analysis of the rise of syndromic surveillance systems for the advanced detection of pandemic threats within contemporary global health security frameworks. The article traces the iterative evolution and ascendancy of three such novel syndromic surveillance systems for the strengthening of health security initiatives over the past two decades: 1) The Program for Monitoring Emerging Diseases (ProMED-mail); 2) The Global Public Health Intelligence Network (GPHIN); and 3) HealthMap. This article demonstrates how each newly introduced syndromic surveillance system has become increasingly oriented towards the integration of digital algorithms into core surveillance capacities to continually harness and forecast upon infinitely generating sets of digital, open-source data, potentially indicative of forthcoming pandemic threats. This article argues that the increased centrality of the algorithm within these next-generation syndromic surveillance systems produces a new and distinct form of infectious disease surveillance for the governing of emergent pathogenic contingencies. Conceptually, the article also shows how the rise of this algorithmic mode of infectious disease surveillance produces divergences in the governmental rationalities of global health security, leading to the rise of an algorithmic governmentality within contemporary contexts of Big Data and these surveillance systems. Empirically, this article demonstrates how this new form of algorithmic infectious disease surveillance has been rapidly integrated into diplomatic, legal, and political frameworks to strengthen the practice of global health security – producing subtle, yet distinct shifts in the outbreak notification and reporting transparency of states, increasingly scrutinized by the algorithmic gaze of syndromic surveillance.
Keywords: algorithms, global health, pandemic, surveillance
Procedia PDF Downloads 183
25276 Preliminary Evaluation of Maximum Intensity Projection SPECT Imaging for Whole Body Tc-99m Hydroxymethylene Diphosphonate Bone Scanning
Authors: Yasuyuki Takahashi, Hirotaka Shimada, Kyoko Saito
Abstract:
Bone scintigraphy is widely used as a screening tool for bone metastases. However, the 180- to 240-minute (min) waiting time after the intravenous (i.v.) injection of the tracer is both long and tiresome. To overcome this shortcoming, a bone scan with a shorter waiting time is needed. In this study, we applied Maximum Intensity Projection (MIP) and triple energy window (TEW) scatter correction to a whole body bone SPECT (Merged SPECT) and investigated shortening the waiting time. Methods: In a preliminary phantom study, hot gels of 99mTc-HMDP were inserted into sets of rods with diameters ranging from 4 to 19 mm. Each rod set covered a sector of a cylindrical phantom. The activity concentration of all rods was 2.5 times that of the background in the cylindrical body of the phantom. In the human study, SPECT images were obtained from chest to abdomen at 30 to 180 min after 99mTc-hydroxymethylene diphosphonate (HMDP) injection of healthy volunteers. For both studies, MIP images were reconstructed. Planar whole body images of the patients were also obtained; these were acquired at 200 min. The image quality of the SPECT and the planar images was compared. Additionally, 36 patients with breast cancer were scanned in the same way, and the detectability of uptake regions (metastases) was compared visually. Results: In the phantom study, a 4 mm hot gel was difficult to depict on conventional SPECT, but MIP images could recognize it clearly. For both the healthy volunteers and the clinical patients, the accumulation of 99mTc-HMDP in the SPECT was good as early as 90 min. All findings of both image sets were in agreement. Conclusion: In phantoms, MIP images with TEW scatter correction could detect all rods down to a diameter of 4 mm. In patients, MIP reconstruction with TEW scatter correction could improve the detectability of hot lesions. In addition, the time between injection and imaging could be shortened relative to that conventionally used for whole body scans.
Keywords: merged SPECT, MIP, TEW scatter correction, 99mTc-HMDP
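For readers unfamiliar with the reconstruction step: a maximum intensity projection simply collapses the 3D SPECT volume along a viewing axis, keeping the brightest voxel along each ray. A minimal numpy sketch with a random stand-in volume follows.

```python
import numpy as np

# Stand-in reconstructed SPECT volume: (slices, rows, columns).
volume = np.random.rand(64, 128, 128)

# MIP: keep the maximum voxel value along each ray through the volume.
mip = volume.max(axis=1)

# Rotating the volume before projecting yields MIPs from other view angles.
```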
Procedia PDF Downloads 411
25275 Using Maximization Entropy in Developing a Filipino Phonetically Balanced Wordlist for a Phoneme-Level Speech Recognition System
Authors: John Lorenzo Bautista, Yoon-Joong Kim
Abstract:
In this paper, a Filipino Phonetically Balanced Word list consisting of 250 words (PBW250) was constructed for a phoneme-level ASR system for the Filipino language. Entropy maximization is used to obtain phonological balance in the list: the entropy of the phonemes in a word is maximized, providing an optimal balance in each word's phonological distribution, using the Add-Delete Method (PBW algorithm). This is compared to a modified PBW algorithm implemented in a dynamic algorithm approach to obtain optimization. The PBW and modified algorithms achieved entropy scores of 4.2791 and 4.2902, respectively. The PBW250 was recorded by 40 respondents, each contributing 2 sets of data. Recordings from 30 respondents were used to train an acoustic model that was tested on recordings from the remaining 10 respondents using the HMM Toolkit (HTK). The tests gave a maximum accuracy rate of 97.77% for a speaker-dependent test and 89.36% for a speaker-independent test.
Keywords: entropy maximization, Filipino language, Hidden Markov Model, phonetically balanced words, speech recognition
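A minimal sketch of the entropy objective and one add-delete pass is given below; the phoneme representation (words as tuples of phonemes) and the candidate pool are placeholder assumptions rather than the paper's actual algorithm details.

```python
import math
from collections import Counter

def phoneme_entropy(words):
    """Shannon entropy (bits) of the phoneme distribution of a word list."""
    counts = Counter(p for word in words for p in word)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def add_delete_step(wordlist, candidates):
    """Try swapping each list word for each candidate; keep the best swap."""
    best, best_h = wordlist, phoneme_entropy(wordlist)
    for out in wordlist:
        for cand in candidates:
            trial = [cand if w == out else w for w in wordlist]
            h = phoneme_entropy(trial)
            if h > best_h:
                best, best_h = trial, h
    return best

words = [("b", "a", "t", "a"), ("m", "a", "t", "a")]
pool = [("k", "a", "i", "n"), ("l", "u", "t", "o")]
print(phoneme_entropy(add_delete_step(words, pool)))
```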
Procedia PDF Downloads 457
25274 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. Classification of airborne laser scanning (ALS) point clouds is a very important task that still remains a real challenge for many scientists. Support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. In practice, the data are rarely linearly separable. SVMs are able to map the data into a higher-dimensional space where they become linearly separable, while the kernel allows all the computations to be performed in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited for remote sensing classification problems and explain their recent adoption. In this poster, the SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed to the SVM classifier. A radial basis function (RBF) kernel is used because only a few parameters (C and γ) need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets are selected randomly several times. The obtained results demonstrated that parameter selection can orient the search within a restricted interval of (C and γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier using parameter selection for LiDAR data over the other classifiers.
Keywords: classification, airborne LiDAR, parameters selection, support vector machine
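The grid search over (C, γ) with 5-fold cross-validation described above maps directly onto scikit-learn; the grid values and the synthetic stand-in features below are assumptions, since the real features would come from the connected-component clusters.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in for per-cluster features and the 4 class labels.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)

search = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]},
    cv=5,                      # 5-fold cross-validation
    scoring="accuracy",        # overall accuracy as the criterion
)
search.fit(X, y)
print(search.best_params_, f"CV accuracy = {search.best_score_:.3f}")
```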
Procedia PDF Downloads 147
25273 Simulation of Reflection Loss for Carbon and Nickel-Carbon Thin Films
Authors: M. Emami, R. Tarighi, R. Goodarzi
Abstract:
Maximal radar wave absorption cannot be achieved by shaping alone. We also have to focus on the parameters of the absorbing materials, such as permittivity, permeability, and thickness, so that the best absorption for a given requirement can be obtained. The real and imaginary parts of the relative complex permittivity (εr' and εr") and permeability (µr' and µr") were obtained by simulation. The microwave absorbing properties of carbon and Ni(C) are simulated in this study with MATLAB software in the frequency range from 2 to 12 GHz, for carbon black (C) and carbon-coated nickel (Ni(C)) with different thicknesses. In effect, we plot reflection loss (RL) versus frequency for C and Ni(C). We compared their absorption at a 3-mm thickness and predicted it for other thicknesses using electromagnetic wave transmission theory. The results showed that the reflection loss position shifts to lower frequency with increasing thickness. We found that, in all cases, using nanocomposites as the absorber does not give better results than pure nanoparticles; the frequency at which absorption is maximal determines the better choice between nanocomposites and pure nanoparticles. We could also find an optimal thickness for long-wavelength absorption, for use in protective shields and coverings.
Keywords: absorbing, carbon, carbon nickel, frequency, thicknesses
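The standard transmission-line expression behind such simulations is easy to reproduce; in the sketch below the material parameters (εr, µr) are placeholder complex values, not the simulated data for C or Ni(C).

```python
import numpy as np

C_LIGHT = 3e8  # speed of light (m/s)

def reflection_loss_db(freq_hz, eps_r, mu_r, thickness_m):
    """Single-layer, metal-backed absorber reflection loss (dB),
    from transmission-line theory with normalized impedance."""
    gamma = 1j * 2 * np.pi * freq_hz * thickness_m / C_LIGHT
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(gamma * np.sqrt(mu_r * eps_r))
    return 20 * np.log10(np.abs((z_in - 1) / (z_in + 1)))

freq = np.linspace(2e9, 12e9, 201)                          # 2-12 GHz band
rl = reflection_loss_db(freq, 12 - 3.5j, 1.1 - 0.4j, 3e-3)  # 3 mm layer
print(f"best RL {rl.min():.1f} dB at {freq[rl.argmin()] / 1e9:.2f} GHz")
```

Increasing `thickness_m` moves the matching dip toward lower frequencies, which is the trend reported above.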
Procedia PDF Downloads 186
25272 A Comprehensive Survey and Improvement to Existing Privacy Preserving Data Mining Techniques
Authors: Tosin Ige
Abstract:
Ethics must be a condition of the world, like logic (Ludwig Wittgenstein, 1889-1951). As important as data mining is, it poses a significant threat to ethics, privacy, and legality, since data mining makes it difficult for an individual or consumer (in the case of a company) to control the accessibility and usage of their data. This research focuses on current issues and the latest research and development in privacy-preserving data mining methods as of 2022. It also discusses some advances in those techniques while highlighting and providing a new technique as a solution to an existing privacy-preserving data mining technique. This paper also bridges the wide gap between data mining and the Web Application Programming Interface (web API), where research is urgently needed for an added layer of security in data mining, while introducing a seamless and more efficient way of data mining.
Keywords: data, privacy, data mining, association rule, privacy preserving, mining technique
Procedia PDF Downloads 172
25271 The Anti-Bladder Cancer Effects Exerted by Hyaluronan Nanoparticles Encapsulated Heteronemin Isolated from Hippospongia Sp.
Authors: Kuan Yin Hsiao, Shyh Ming Kuo, Yi Jhen Wu, Chin Wen Chuang, Chuen-Fu Lin, Wei-qing Yang, Han Hsiang Huang
Abstract:
The anti-tumor effects of natural products, such as compounds from marine sponges and soft corals, have been investigated for decades. Polymeric nanoparticles prepared from biodegradable and biocompatible molecules, such as hyaluronan (HA), chitosan (CHI), and gelatin, have been widely studied. Encapsulating anti-cancer therapies in biopolymeric nanoparticle drug delivery systems can potentially improve their therapeutic effects and attenuate their toxicity. In the current study, the anti-bladder-cancer effects of heteronemin extracted from the sponge Hippospongia sp., with or without HA and CHI nanoparticle encapsulation, were evaluated in vitro. Results showed that the IC50 (half maximal inhibitory concentration) of heteronemin against T24 human bladder cancer cell viability is approximately 0.18 µg/mL. Both plain and HA nanoparticle-encapsulated heteronemin at 0.2 and 0.4 µg/mL significantly reduced T24 cell viability (P<0.001), while HA nanoparticle-encapsulated heteronemin showed weaker viability-inhibitory effects on L929 fibroblasts than plain heteronemin at identical concentrations. HA and CHI nanoparticle-encapsulated heteronemin exhibited significantly stronger inhibitory effects against the migration of T24 human bladder cancer cells than plain heteronemin at the same concentrations (P<0.001). Flow cytometric results showed that 0.2 µg/mL HA and CHI nanoparticle-encapsulated heteronemin induced a higher early apoptosis rate than plain heteronemin at the same concentration. These results show that HA and CHI nanoparticle encapsulation can elevate the anti-migratory and apoptosis-inducing effects of heteronemin against bladder cancer cells in vitro. The in vivo anti-bladder-cancer effects of the compound with or without HA/CHI nanoparticle encapsulation will be further investigated using murine tumor models. The data obtained from this study will provide an extensive evaluation of the anti-bladder-cancer effects of heteronemin and HA/CHI-encapsulated heteronemin and pave the way toward developing potential bladder cancer treatments.
Keywords: heteronemin, nanoparticles, hyaluronan, chitosan, bladder cancer
Procedia PDF Downloads 456
25270 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore their ability to distinguish between controls and patients using mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to McDonald and Polman, and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRIs were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with number of components = 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in the network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM to obtain a ranking of the most predictive variables. We then built two new classifiers on only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the most discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensorial deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
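The RF branch of this pipeline (75/25 split, Gini-based feature ranking, retrain on the top feature) can be sketched as follows; the synthetic 37 × 15 matrix stands in for the subjects-by-networks data, and the forest size is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for 37 subjects x 15 network mean signals.
X, y = make_classification(n_samples=37, n_features=15, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75,
                                          random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rank = np.argsort(rf.feature_importances_)[::-1]   # Gini-based ranking
top = rank[:1]                                     # keep the best "network"

rf_top = RandomForestClassifier(n_estimators=500, random_state=0)
rf_top.fit(X_tr[:, top], y_tr)
print("test accuracy on the top feature:", rf_top.score(X_te[:, top], y_te))
```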
Procedia PDF Downloads 240
25269 Semantic Data Schema Recognition
Authors: Aïcha Ben Salem, Faouzi Boufares, Sebastiao Correia
Abstract:
The subject covered in this paper aims at assisting the user in their quality approach. The goal is to better extract, mix, interpret, and reuse data. It deals with the semantic schema recognition of a data source, which enables the extraction of data semantics from all the available information, including the data and the metadata. Firstly, it consists of categorizing the data by assigning it to a category and possibly a sub-category, and secondly, of establishing relations between columns and possibly discovering the semantics of the manipulated data source. These links detected between columns offer a better understanding of the source and of the alternatives for correcting data. This approach allows automatic detection of a large number of syntactic and semantic anomalies.
Keywords: schema recognition, semantic data profiling, meta-categorisation, semantic dependencies inter columns
Procedia PDF Downloads 418
25268 Optimal Economic Restructuring Aimed at an Optimal Increase in GDP Constrained by a Decrease in Energy Consumption and CO2 Emissions
Authors: Alexander Vaninsky
Abstract:
The objective of this paper is to find a way of economic restructuring - that is, a change in the shares of sectoral gross outputs - resulting in the maximum possible increase in the gross domestic product (GDP) combined with decreases in energy consumption and CO2 emissions. It uses an input-output model for the GDP and factorial models for energy consumption and CO2 emissions to determine the projection of the gradient of GDP, and the antigradients of energy consumption and CO2 emissions, respectively, onto a subspace formed by the structure-related variables. Since the gradient (antigradient) provides the direction of steepest increase (decrease) of the objective function, and their projections retain this property for the functions restricted to the subspace, each of the three directional vectors solves a particular problem of optimal structural change. In the next step, a type of factor analysis is applied to find a convex combination of the projected gradient and antigradients having the maximal possible positive correlation with each of the three. This convex combination provides the desired direction of structural change. The national economy of the United States is used as an example application.
Keywords: economic restructuring, input-output analysis, Divisia index, factorial decomposition, E3 models
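In symbols, and assuming purely for illustration that the structure-related constraints can be written as a linear system As = b (e.g., sectoral shares summing to one), the projected directions take the familiar form:

```latex
\[
  P \;=\; I - A^{\top}\!\left(AA^{\top}\right)^{-1}\!A , \qquad
  d_{\mathrm{GDP}} \;=\; P\,\nabla_{s}\,\mathrm{GDP}, \qquad
  d_{E} \;=\; -P\,\nabla_{s}E, \qquad
  d_{C} \;=\; -P\,\nabla_{s}C ,
\]
\[
  d \;=\; \alpha\, d_{\mathrm{GDP}} + \beta\, d_{E} + \gamma\, d_{C},
  \qquad \alpha+\beta+\gamma = 1,\ \ \alpha,\beta,\gamma \ge 0 ,
\]
```

with the weights chosen, as described above, to maximize the positive correlation of d with all three projected directions.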
Procedia PDF Downloads 314
25267 Access Control System for Big Data Application
Authors: Winfred Okoe Addy, Jean Jacques Dominique Beraud
Abstract:
Access control systems (ACs) are some of the most important components in security areas. Inaccuracies in regulatory frameworks make tailored policies and remedies more appropriate than standard models or protocols. This problem is exacerbated by the increasing complexity of software, such as integrated Big Data (BD) software for controlling large volumes of encrypted data and resources embedded in a dedicated BD production system. This paper proposes a general access control strategy system for the diffusion of Big Data domains, since it is crucial to secure the data provided to data consumers (DC). We present a general access control circulation strategy for the Big Data domain, describing the benefits of using designated access control for BD units and performance, and taking into consideration the needs of BD and AC systems. We then present a generic Big Data access control system to improve the dissemination of Big Data.
Keywords: access control, security, Big Data, domain
Procedia PDF Downloads 134
25266 A Data Envelopment Analysis Model in a Multi-Objective Optimization with Fuzzy Environment
Authors: Michael Gidey Gebru
Abstract:
Most Data Envelopment Analysis models operate in a static environment with input and output parameters chosen as deterministic data. However, due to the ambiguity brought on by shifting market conditions, input and output data are not always precisely gathered in real-world scenarios. Fuzzy numbers can be used to address this kind of ambiguity in input and output data. Therefore, this work aims to extend crisp Data Envelopment Analysis to Data Envelopment Analysis in a fuzzy environment. In this study, the input and output data are regarded as triangular fuzzy numbers. The Data Envelopment Analysis model with a fuzzy environment is then solved using a multi-objective method to gauge the efficiency of the Decision Making Units. Finally, the developed Data Envelopment Analysis model is illustrated with an application on real data from 50 educational institutions.
Keywords: efficiency, Data Envelopment Analysis, fuzzy, higher education, input, output
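As a point of reference, a common way to write the fuzzy extension is to replace the crisp data in the standard CCR ratio model with triangular fuzzy numbers; the formulation below is a generic sketch of that idea, not necessarily the exact multi-objective model used in the paper.

```latex
\[
  \tilde{x}_{ij} = \left(x_{ij}^{l},\, x_{ij}^{m},\, x_{ij}^{u}\right), \qquad
  \tilde{y}_{rj} = \left(y_{rj}^{l},\, y_{rj}^{m},\, y_{rj}^{u}\right),
\]
\[
  \max \ \tilde{\theta}_{o}
    = \frac{\sum_{r} u_{r}\,\tilde{y}_{ro}}{\sum_{i} v_{i}\,\tilde{x}_{io}}
  \quad \text{s.t.} \quad
  \frac{\sum_{r} u_{r}\,\tilde{y}_{rj}}{\sum_{i} v_{i}\,\tilde{x}_{ij}} \le 1
  \ \ \forall j, \qquad u_{r}, v_{i} \ge 0 ,
\]
```

where a multi-objective treatment optimizes the lower, middle, and upper levels of the fuzzy efficiency simultaneously rather than collapsing them to a single crisp score.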
Procedia PDF Downloads 57
25265 Synthesis, Biological Evaluation and Molecular Modeling Studies on Chiral Chloroquine Analogues as Antimalarial Agents
Authors: Srinivasarao Kondaparla, Utsab Debnath, Awakash Soni, Vasantha Rao Dola, Manish Sinha, Kumkum Srivastava, Sunil K. Puri, Seturam B. Katti
Abstract:
In a focused exploration, we designed, synthesized, and biologically evaluated new chiral conjugated chloroquine (CQ) analogs with substituted piperazines as antimalarial agents. In vitro as well as in vivo studies revealed that compound 7c showed potent activity [in vitro IC₅₀ = 56.98 nM (3D7) and 97.76 nM (K1); in vivo activity up to a dose of 12.5 mg/kg; SI = 3510] as a new antimalarial lead. Other compounds 6b, 6d, 7d, 7h, 8c, 8d, 9a, and 9c also show moderate activity against the CQ-sensitive (3D7) strain and superior activity against the resistant (K1) strain of P. falciparum. Furthermore, we carried out docking and 3D-QSAR studies on an in-house data set (168 molecules) of chiral CQ analogs to explain the structure-activity relationships (SAR). Our new findings specify the significance of H-bond interaction with the side chain of heme for biological activity. In addition, the 3D-QSAR study against the 3D7 strain indicated the favorable and unfavorable sites of the CQ analogs for incorporating steric, hydrophobic, and electropositive groups to improve antimalarial activity.
Keywords: piperazines, CQ-sensitive strain-3D7, in-vitro and in-vivo assay, docking, 3D-QSAR
Procedia PDF Downloads 171
25264 Parallel Multisplitting Methods for Differential Systems
Authors: Malika El Kyal, Ahmed Machmoum
Abstract:
We prove the superlinear convergence of asynchronous multisplitting methods applied to differential equations. This study is based on the technique of nested sets, which permits specifying the kind of convergence obtained in the asynchronous mode. The main characteristic of an asynchronous mode is that the local algorithm does not have to wait for predetermined messages to become available. We allow some processors to communicate more frequently than others, and we allow the communication delays to be substantial and unpredictable. Note that synchronous algorithms in the computer science sense are particular cases of our formulation of asynchronous ones.
Keywords: parallel methods, asynchronous mode, multisplitting, ODE
Procedia PDF Downloads 526
25263 Challenges of Skill Training among Women with Intellectual Disability: Stakeholders' Perspective
Authors: Jayanti Pujari
Abstract:
The present study attempts to find the barriers faced by adult women with intellectual disability during their training at vocational training centres offered by rehabilitation institutes. As economic independence is the ultimate aim of rehabilitation, this study focuses on the barriers that restrict adult women with intellectual disability from equipping themselves with the skills that can empower them and help them live independently. The objectives of the study are (1) to find the barriers perceived by job coaches during training given to women with intellectual disability, (2) to find the barriers perceived by the parents of women with intellectual disability who are undergoing vocational training, and (3) to find the barriers perceived by the women with intellectual disabilities during the vocational training. The barriers have been operationalised in the present study from three perspectives: behavioural barriers, competency-related barriers, and accessibility barriers. For the present study, three groups of participants (N=60) were selected through purposive non-probability sampling to generate the data: (20) job coaches working at vocational centres, (20) parents of women with intellectual disabilities, and (20) adult women with intellectual disabilities. The study followed a descriptive research design, and data were generated through self-developed questionnaires. Three sets of self-developed, face-validated questionnaires were used to gather the data from the three categories of sample. Each questionnaire has 30 close-ended questions, which the respondents answer on a three-point scale (yes, no, need help). Both qualitative and quantitative analyses were conducted to test the hypotheses. The major findings of the study show that 87% of the women with intellectual disability perceived the highest barriers to be related to competency, whereas barriers related to behaviour and accessibility were perceived as lowest; 92% of job coaches perceived that barriers related to competencies and accessibility are highest and hinder the effectiveness of skill training of women with intellectual disability; and 74% of the parents of adult women with intellectual disability also opined that barriers related to competencies and accessibility are highest. In conclusion, it is stressed that there is a need to create awareness among the stakeholders about training and management strategies for skill training and positive behaviour support, which will enable adult women with intellectual disability to utilise their residual skills and acquire training to become economically independent.
Keywords: economic independence, intellectual disability, skill development, training barrier
Procedia PDF Downloads 222
25262 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4
Authors: Ryan A. Black, Stacey A. McCaffrey
Abstract:
Over the past few decades, great strides have been made toward improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ the models and decide which are the most appropriate to use in their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, response option formats, and dimensionality. As a result, inferior methods (a.k.a. Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models; that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover binary IRT models from the most basic, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and complex, the 4-parameter logistic (4-PL) model. Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: 1. simulated data of N=500,000 subjects who responded to four dichotomous items, and 2. a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. Real-world data were based on responses collected on items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). IRT analyses conducted on both the simulated data and the real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate simulated data and analyses will be available upon request to allow for replication of results.
Keywords: instrument development, item response theory, latent trait theory, psychometrics
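For reference, the binary models named above form a nested family; the 4-PL item response function below is standard background notation (not taken from the paper itself), with the simpler models obtained by fixing parameters.

```latex
\[
  P\!\left(X_{ij}=1 \mid \theta_{j}\right)
  \;=\; c_{i} + \left(d_{i} - c_{i}\right)
  \frac{1}{1 + \exp\!\left[-a_{i}\left(\theta_{j} - b_{i}\right)\right]} ,
\]
```

where θ_j is the latent trait, a_i the discrimination, b_i the difficulty, c_i the lower asymptote (pseudo-guessing), and d_i the upper asymptote. The 3-PL fixes d_i = 1, the 2-PL additionally fixes c_i = 0, and the 1-PL further constrains all a_i to a common value.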
Procedia PDF Downloads 356
25261 Reinforcement Learning for Quality-Oriented Production Process Parameter Optimization Based on Predictive Models
Authors: Akshay Paranjape, Nils Plettenberg, Robert Schmitt
Abstract:
Producing faulty products can be costly for manufacturing companies and wastes resources. To reduce scrap rates in manufacturing, process parameters can be optimized using machine learning. Thus far, research has mainly focused on optimizing specific processes using traditional algorithms. To develop a framework that enables real-time optimization based on a predictive model for an arbitrary production process, this study explores the application of reinforcement learning (RL) in this field. Based on a thorough review of the literature on RL and process parameter optimization, a model based on maximum a posteriori policy optimization that can handle both numerical and categorical parameters is proposed. A case study compares the model to state-of-the-art traditional algorithms and shows that RL can find optima of similar quality while requiring significantly less time. These results are confirmed in a large-scale validation study on data sets from both production and other fields. Finally, multiple ways to improve the model are discussed.
Keywords: reinforcement learning, production process optimization, evolutionary algorithms, policy optimization, actor critic approach
Procedia PDF Downloads 97
25260 Electromyography Analysis during Walking and Seated Stepping in the Elderly
Authors: P. Y. Chiang, Y. H. Chen, Y. J. Lin, C. C. Chang, W. C. Hsu
Abstract:
The number of elderly people in the world population is increasing, and so is the rate of falls among them; decreasing muscle strength and an increasing risk of falling are associated with the ageing process. Because the effects of seated stepping training on walking performance in the elderly remain unclear, the main purpose of the proposed study is to perform electromyography analysis during walking and seated stepping in the elderly. Four surface EMG electrodes were attached to the surface of the lower-limb muscles, the vastus lateralis (VL) and gastrocnemius (GT) of both sides. Before the test, the maximal voluntary contraction (MVC) of each muscle was obtained using manual muscle testing. The analog raw EMG signals were digitized at a sampling frequency of 2000 Hz. The signals were fully rectified and the linear envelope was calculated. The stepping motion cycle was separated into two phases by the stepping timing (ST) and the pedal return timing (PRT). ST refers to the time when the pedal marker reached its highest point, indicating that the contralateral leg was about to release the pedal. PRT refers to the time when the pedal marker reached its lowest point, indicating that the contralateral leg was about to step on the pedal. We assumed that ST played the same role as initial contact during walking, and PRT as toe-off. The period from ST to the next PRT was called the pushing phase (PP), during which the leg steps against resistance; we compare this phase with the stance phase in level walking. The period from PRT to the next ST was called the returning phase (RP), during which the leg encounters no resistance; we compare this phase with the swing phase in level walking. VL and GT muscle activation showed similar patterns on both sides. This ability may transfer to that needed during the loading response, mid-stance, and terminal swing phases. Users needed to make more effort in stepping than in walking at similar timing; thus, strengthening the VL and GT may help improve walking endurance and efficiency in the elderly.
Keywords: elderly, electromyography, seated stepping, walking
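The rectification and linear-envelope step described above is a standard pipeline; the sketch below assumes a 4th-order 6 Hz low-pass Butterworth filter (a common but not stated choice) and uses a synthetic signal in place of recorded EMG.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000                                    # sampling frequency (Hz)
t = np.arange(0, 5, 1 / fs)
raw_emg = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * t))  # stand-in

rectified = np.abs(raw_emg)                  # full-wave rectification
b, a = butter(4, 6 / (fs / 2), btype="low")  # 4th-order 6 Hz low-pass
envelope = filtfilt(b, a, rectified)         # zero-lag linear envelope

mvc = envelope.max()                         # stand-in for the MVC trial
percent_mvc = envelope / mvc * 100           # activation normalized to %MVC
```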
Procedia PDF Downloads 221
25259 Local Spectrum Feature Extraction for Face Recognition
Authors: Muhammad Imran Ahmad, Ruzelita Ngadiran, Mohd Nazrin Md Isa, Nor Ashidi Mat Isa, Mohd Zaizu Ilyas, Raja Abdullah Raja Ahmad, Said Amirul Anwar Ab Hamid, Muzammil Jusoh
Abstract:
This paper presents two techniques, local feature extraction using the image spectrum and low-frequency spectrum modelling using GMMs, to capture the underlying statistical information and improve the performance of a face recognition system. Local spectrum features are extracted using overlapping sub-block windows mapped onto the face image. For each block, the spatial domain is transformed to the frequency domain using the DFT. Low-frequency coefficients are preserved by discarding the high-frequency coefficients, applying a rectangular mask to the spectrum of the facial image. Low-frequency information is non-Gaussian in the feature space, and by using a combination of several Gaussian functions with different statistical properties, the best feature representation can be modelled as a probability density function. The recognition process is performed using the maximum likelihood value computed from pre-calculated GMM components. The method is tested on the FERET data sets and achieves 92% recognition rates.
Keywords: local features modelling, face recognition system, Gaussian mixture models, FERET
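A minimal sketch of that pipeline (overlapping blocks, 2D DFT, rectangular low-frequency mask, then a GMM density) is given below; the block size, mask size, and component count are assumptions, and a real system would train one GMM per identity on many images.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def block_spectrum_features(img, block=16, step=8, keep=4):
    """Overlapping sub-blocks -> 2D DFT -> keep a low-frequency
    rectangle of magnitude coefficients as the feature vector."""
    feats = []
    for r in range(0, img.shape[0] - block + 1, step):
        for c in range(0, img.shape[1] - block + 1, step):
            spec = np.fft.fft2(img[r:r + block, c:c + block])
            feats.append(np.abs(spec[:keep, :keep]).ravel())
    return np.array(feats)

face = np.random.rand(64, 64)          # stand-in for a FERET face image
X = block_spectrum_features(face)
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(X)
print(gmm.score(X))                    # mean log-likelihood used for matching
```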
Procedia PDF Downloads 667
25258 Emotion Recognition Using Artificial Intelligence
Authors: Rahul Mohite, Lahcen Ouarbya
Abstract:
This paper focuses on the interplay between humans and computer systems and the ability of these systems to understand and respond to human emotions, including non-verbal communication. Current emotion recognition systems are based solely on either facial or verbal expressions. The limitation of these systems is that they require large training data sets. The paper proposes a system for recognizing human emotions that combines both speech and facial-expression recognition. The system utilizes advanced techniques such as deep learning and image recognition to identify facial expressions and comprehend emotions. The results show that the proposed system, based on the combination of facial expression and speech, outperforms existing ones based solely on either facial or verbal expressions. The proposed system detects human emotion with an accuracy of 86%, whereas the existing systems have an accuracy of 70% using verbal expression only and 76% using facial expression only. The paper also discusses the increasing significance of and demand for facial recognition technology in emotion recognition.
Keywords: facial recognition, expression recognition, deep learning, image recognition, facial technology, signal processing, image classification
Procedia PDF Downloads 121
25257 A Method to Evaluate and Compare Web Information Extractors
Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman
Abstract:
Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information be exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents; b) it provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather surveys the many features to take into account as well as related work; c) we provide a novel method to compute the performance measures for unsupervised proposals, which would otherwise require the intervention of a user to compute them from the annotations on the evaluation sets and the information extracted. Our contributions will definitely help researchers in this area make sure that they have advanced the state of the art not only conceptually, but from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
Keywords: web information extractors, information extraction evaluation method, Google scholar, web
Procedia PDF Downloads 248
25256 Identification of Candidate Congenital Heart Defects Biomarkers by Applying a Random Forest Approach on DNA Methylation Data
Authors: Kan Yu, Khui Hung Lee, Eben Afrifa-Yamoah, Jing Guo, Katrina Harrison, Jack Goldblatt, Nicholas Pachter, Jitian Xiao, Guicheng Brad Zhang
Abstract:
Background and Significance of the Study: Congenital Heart Defects (CHDs) are the most common malformation at birth and one of the leading causes of infant death. Although the exact etiology remains a significant challenge, epigenetic modifications, such as DNA methylation, are thought to contribute to the pathogenesis of congenital heart defects. At present, no DNA methylation biomarkers are used for early detection of CHDs. The existing CHD diagnostic techniques are time-consuming and costly and can only be used to diagnose CHDs after an infant is born. The present study employed a machine learning technique to analyse genome-wide methylation data in children with and without CHDs, with the aim of finding methylation biomarkers for CHDs. Methods: The Illumina Human Methylation EPIC BeadChip was used to screen the genome-wide DNA methylation profiles of 24 infants diagnosed with congenital heart defects and 24 healthy infants without congenital heart defects. Primary pre-processing was conducted using the RnBeads and limma packages. The methylation levels of the top 600 genes with the lowest p-values were selected and further investigated using a random forest approach. ROC curves were used to analyse the sensitivity and specificity of each biomarker in both the training and test sample sets. The functionalities of selected genes with high sensitivity and specificity were then assessed in molecular processes. Major Findings of the Study: Three genes (MIR663, FGF3, and FAM64A) were identified from both the training and validation data by random forests, with an average sensitivity and specificity of 85% and 95%. GO analyses of the top 600 genes showed that these putative differentially methylated genes were primarily associated with regulation of lipid metabolic processes, protein-containing complex localization, and the Notch signalling pathway. The present findings highlight that aberrant DNA methylation may play a significant role in the pathogenesis of congenital heart defects.
Keywords: biomarker, congenital heart defects, DNA methylation, random forest
Procedia PDF Downloads 158
25255 A Study on the Magnetic and Submarine Geology Structure of TA22 Seamount in Lau Basin, Tonga
Authors: Soon Young Choi, Chan Hwan Kim, Chan Hong Park, Hyung Rae Kim, Myoung Hoon Lee, Hyeon-Yeong Park
Abstract:
We performed a marine magnetic, bathymetric, and seismic survey at the TA22 seamount (in the Lau basin, SW Pacific) in October 2009 to search for submarine hydrothermal deposits. We acquired magnetic and bathymetry data sets using an Overhauser proton magnetometer, the SeaSPY (Marine Magnetics Co.), and a multi-beam echo sounder, the EM120 (Kongsberg Co.). We processed the data to obtain detailed seabed topography, the magnetic anomaly, the reduction to the pole (RTP), and the magnetization. Based on the magnetic properties, we analyzed the submarine geological structure of the TA22 seamount together with the post-processed seismic profile. The detailed bathymetry of the TA22 seamount showed left and right crests, each with caldera features in its central part. The magnetic anomaly distribution of the TA22 seamount regionally displayed high magnetic anomalies in the northern part and low magnetic anomalies in the southern part around the caldera features. The RTP magnetic anomaly distribution showed high magnetic anomalies in the central part of each caldera, with stronger anomalies inside the calderas than on their outer flanks. The magnetization distribution showed a low-magnetization zone in the center of each caldera and high-magnetization zones in the southern and northern east parts. Analysis of the seismic profile map suggests small mounds inside the central part of each caldera and the possibility of magma-formed sills in the case of the right caldera. Taking into account all the results of this study (bathymetry, magnetic anomaly, RTP, magnetization, seismic profile), together with rock samples collected at the left caldera area during the 2009 survey, we propose the possibility of hydrothermal deposits at the mounds in the central part of each caldera and on the outer flanks of the calderas representing the low-magnetization zone. We expect better results from combined modeling of this study's data with other geological data (e.g., detailed gravity, 3D seismic, and petrologic study results).
Keywords: detailed bathymetry, magnetic anomaly, seamounts, seismic profile, SW Pacific
Procedia PDF Downloads 402
25254 The Acquisition of /r/ By Setswana-Learning Children
Authors: Keneilwe Matlhaku
Abstract:
Crosslinguistic studies (theoretical and clinical) have shown delays and significant misarticulation in the acquisition of rhotics. This article provides a detailed analysis of the early development of the rhotic phoneme, an apical trill /r/, by monolingual Setswana (Tswana S30) children aged between 1 and 4 years. The data display the following trends: (1) late acquisition of /r/; (2) a wide range of substitution patterns involving this phoneme (i.e., gliding, coronal stopping, affrication, deletion, lateralization, as well as substitution by dental and uvular fricatives). The primary focus of the article is the potential origins of these variations of /r/, even within the same language. Our data comprise naturalistic longitudinal audio recordings of 6 children (2 males and 4 females) whose speech was recorded in their homes over a period of 4 months with no or only minimal disruptions to their daily environments. Phon software (Rose et al. 2013; Rose & MacWhinney 2014) was used to carry out the orthographic and phonetic transcriptions of the children's data. Phon also enabled the generation of the children's phonological inventories for comparison with adult target IPA forms. We explain the children's patterns through current models of phonological emergence (MacWhinney 2015; McAllister Byun, Inkelas & Rose 2016; Rose et al. 2022), which highlight the perceptual and articulatory factors influencing the development of sounds and sound classes. We highlight how the substitution patterns observed in the data can be captured through a consideration of the auditory properties of the target speech sounds, combined with an understanding of the types of articulatory gestures involved in the production of these sounds. These considerations, in turn, highlight some of the most central aspects of the challenges the child faces in learning these auditory-articulatory mappings. We provide a cross-linguistic survey of the acquisition of rhotic consonants in a sample of related and unrelated languages, in which we show that the variability and volatility of the substitution patterns of /r/ are also brought about by the properties of the children's ambient languages. Beyond theoretical issues, this article sets an initial foundation for developing speech-language pathology materials and services for Setswana-learning children, an emerging area of public service in Botswana.
Keywords: rhotic, apical trill, Phon, phonological emergence, auditory, articulatory, mapping
Procedia PDF Downloads 38
25253 Defect Correlation of Computed Tomography and Serial Sectioning in Additively Manufactured Ti-6Al-4V
Authors: Bryce R. Jolley, Michael Uchic
Abstract:
This study presents initial results toward the correlative characterization of inherent defects in additively manufactured (AM) Ti-6Al-4V. X-ray computed tomography (CT) defect data are compared and correlated with microscopic photographs obtained via automated serial sectioning. The metal AM specimen was manufactured from virgin Ti-6Al-4V powder to specified dimensions. A post-contour was applied during the fabrication process with a speed of 1050 mm/s, a power of 260 W, and a width of 140 µm. The specimen was stress-relief heat-treated at 16°F for 3 hours. Microfocus CT imaging was performed on a predetermined region of the build with parameters optimized for Ti-6Al-4V additive manufacture. After CT imaging, a modified RoboMet.3D version 2 was employed for serial sectioning and optical microscopy characterization of the same predetermined region. Automated montage capture of sub-micron-resolution, bright-field reflection, 12-bit monochrome optical images was performed. These optical images were post-processed to produce 2D and 3D data sets; this processing included thresholding and segmentation to improve visualization of defect features. The defects observed by optical imaging were compared and correlated with the defects observed by CT imaging over the same predetermined region of the specimen. Quantitative results of area fraction and equivalent pore diameters obtained via each method are presented for this correlation. It is shown that microfocus CT imaging does not capture all inherent defects within this Ti-6Al-4V AM sample. Best practices for this correlative effort are also presented, as well as future research directions resulting from this study.
Keywords: additive manufacture, automated serial sectioning, computed tomography, nondestructive evaluation
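The two quantitative measures named above are straightforward to compute from a binary defect mask; the sketch below uses a synthetic mask, and the pixel size is a placeholder.

```python
import numpy as np

def area_fraction(mask):
    """Fraction of pixels flagged as defect (mask: 1 = pore, 0 = metal)."""
    return mask.mean()

def equivalent_diameter(pore_area_px, px_size_um=1.0):
    """Diameter of the circle whose area equals the pore area."""
    return 2.0 * np.sqrt(pore_area_px / np.pi) * px_size_um

mask = np.zeros((100, 100), dtype=float)
mask[40:44, 40:44] = 1.0                      # one synthetic 16-pixel pore
print(area_fraction(mask),
      equivalent_diameter(mask.sum(), px_size_um=0.7))
```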
Procedia PDF Downloads 141