Search results for: type classification
7628 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals
Authors: Christine F. Boos, Fernando M. Azevedo
Abstract:
Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, that is still poorly explained by science and whose diagnosis remains predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings that are at least 24 hours long and acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists' task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for pattern classification.
One of the differences between all of these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced into the network. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal's morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance of the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the morphological descriptors presented efficiency values between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing
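As a rough illustration of one of the five input types, the sketch below derives an FFT-spectrum feature vector from a raw EEG window before it would be fed to a network. The sampling rate and bin count are illustrative assumptions, not values from the study.

```python
import numpy as np

def fft_spectrum_features(eeg_window, n_bins=32):
    """Compute a magnitude-spectrum feature vector from one EEG window.

    eeg_window : 1-D array of raw EEG samples (one channel).
    n_bins     : number of averaged spectral bins fed to the network.
    """
    # Window the segment to reduce spectral leakage, then take |FFT|.
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window))))
    # Average the spectrum into fixed-size bins so every window yields
    # the same input dimensionality regardless of window length.
    bins = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bins])

# A 1 s window at an assumed 256 Hz sampling rate (an epileptiform
# discharge of at most 200 ms would span ~51 of these samples).
window = np.random.randn(256)
features = fft_spectrum_features(window)
print(features.shape)
```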
Procedia PDF Downloads 528
7627 Comparison of Some Elastic and Mechanical Properties of Neptunium Dioxide
Abstract:
We report some elastic parameters of cubic fluorite-type neptunium dioxide (NpO2), obtained with a recent EAM-type interatomic potential through geometry optimization calculations. The typical cubic elastic constants, bulk modulus, shear modulus, Young's modulus and other relevant elastic parameters were also calculated. We then compared our results with the available theoretical data; they agree well with previous theoretical findings for the considered quantities of NpO2.
Keywords: NpO2, elastic properties, bulk modulus, mechanical properties
Procedia PDF Downloads 337
7626 Effects of Resistance Exercise Training on Blood Profile and CRP in Men with Type 2 Diabetes Mellitus
Authors: Mohsen Salesi, Seyyed Zoheir Rabei
Abstract:
Exercise has been considered a cornerstone of diabetes prevention and treatment for decades, but the benefits of resistance training are less clear. The purpose of this study was to determine the impact of resistance training on the blood profile and an inflammatory marker (CRP) of people with type 2 diabetes mellitus. Thirty diabetic males were recruited (age: 50.34±10.28 years) and randomly assigned to an 8-week resistance exercise training group (n=15) or a control group (n=15). Before and after training, blood pressure, weight, lipid profile (TC, TG, LDL-c, and HDL-c) and hs-CRP were measured. The resistance exercise training group took part in supervised 50-80 minute resistance training sessions, three days a week on non-consecutive days, for 8 weeks. Each exercise session included approximately 10 min of warm-up and cool-down periods. Results showed that TG significantly decreased (pre 210.19±9.31 vs. 101.12±7.25, p=0.03) and HDL-c significantly increased (pre 42.37±3.15 vs. 47.50±2.19, p=0.01) after exercise training. However, there was no difference between groups in TC, LDL-c, BMI or weight. In addition, the decrease in fasting blood glucose levels showed a significant difference between groups (pre 144.65±5.73 vs. 124.21±6.48, p=0.04). Regular resistance exercise training can improve the lipid profile and reduce cardiovascular risk factors in T2DM patients.
Keywords: lipid profile, resistance exercise, type 2 diabetes mellitus, men
Procedia PDF Downloads 414
7625 [Keynote Talk]: Treatment Satisfaction and Safety of Sitagliptin versus Pioglitazone in Patients with Type 2 Diabetes Mellitus Inadequately Controlled on Metformin Monotherapy
Authors: Shahnaz Haque, Anand Shukla, Sunita Singh, Anil Kem
Abstract:
Introduction: Diabetes mellitus is a chronic metabolic disease affecting millions worldwide. Metformin is the most commonly prescribed first-line oral hypoglycemic drug for type 2 diabetes mellitus, but due to the progressive worsening of blood glucose control during the natural history of type 2 diabetes, combination therapy usually becomes necessary. Objective: This study was designed to assess treatment satisfaction with Sitagliptin versus Pioglitazone added to Metformin in patients with type 2 diabetes mellitus (T2DM). Methods: We conducted a prospective, open-label, randomized, parallel-group study in SIMS, Hapur, U.P. Eligible patients fulfilling the inclusion criteria were randomized into two groups of 25 patients each, receiving tab Sitagliptin 100 mg or tab Pioglitazone 30 mg added to ongoing tab Metformin (500 mg) therapy for 16 weeks. Follow-up visits were at weeks 4, 12 and 16. Result: After 16 weeks, the addition of Sitagliptin 100 mg to ongoing Metformin therapy provided glycosylated hemoglobin (HbA1c) lowering efficacy similar to that of Pioglitazone 30 mg in patients with T2DM inadequately controlled on metformin monotherapy. The change in HbA1c was -0.656±0.21% (p<0.0001) in group 1 and -0.748±0.35% (p<0.0001) in group 2; hence the decrease in HbA1c from baseline was greater in group 2. Both treatments were well tolerated, with a negligible risk of hypoglycaemia. Weight loss was observed with Sitagliptin, in contrast to the weight gain seen with Pioglitazone. Conclusion: In this study, sitagliptin 100 mg plus metformin and pioglitazone 30 mg plus metformin were both effective and well tolerated, and glycemic control improved in both groups. The addition of pioglitazone caused oedema and weight gain in patients, whereas sitagliptin caused weight loss.
Keywords: sitagliptin, pioglitazone, metformin, type 2 diabetes mellitus
Procedia PDF Downloads 303
7624 Roughness Discrimination Using Bioinspired Tactile Sensors
Authors: Zhengkun Yi
Abstract:
Surface texture discrimination using artificial tactile sensors has attracted increasing attention in the past decade, as it can endow technical and robot systems with a key missing ability. However, as a major component of texture, roughness has rarely been explored. This paper presents an approach for tactile surface roughness discrimination, which includes two parts: (1) the design and fabrication of a bioinspired artificial fingertip, and (2) tactile signal processing for tactile surface roughness discrimination. The bioinspired fingertip comprises two polydimethylsiloxane (PDMS) layers, a polymethyl methacrylate (PMMA) bar, and two perpendicular polyvinylidene difluoride (PVDF) film sensors. This artificial fingertip mimics human fingertips in three aspects: (1) the elastic properties of the epidermis and dermis in human skin are replicated by the two PDMS layers with different stiffness, (2) the PMMA bar serves a role analogous to that of a bone, and (3) the PVDF film sensors emulate Meissner's corpuscles in terms of both location and response to vibratory stimuli. Various extracted features and classification algorithms, including support vector machines (SVM) and k-nearest neighbors (kNN), are examined for tactile surface roughness discrimination. Eight standard rough surfaces with roughness values (Ra) of 50 μm, 25 μm, 12.5 μm, 6.3 μm, 3.2 μm, 1.6 μm, 0.8 μm, and 0.4 μm are explored. The highest classification accuracy of (82.6 ± 10.8)% is achieved using solely one PVDF film sensor with a kNN (k = 9) classifier and the standard deviation feature.
Keywords: bioinspired fingertip, classifier, feature extraction, roughness discrimination
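A minimal sketch of the kNN (k = 9) classification on a standard-deviation feature. The signals here are synthetic stand-ins, not real PVDF recordings; only the class count (eight surfaces) and the classifier settings follow the abstract.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative stand-in data: one vibration signal per touch, labelled
# by surface roughness class (8 classes, as in the eight Ra surfaces).
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(8), 20)                    # 20 touches per class
signals = [rng.normal(0, 0.2 + 0.1 * c, 500) for c in labels]

# Single standard-deviation feature per signal, as used with kNN (k = 9).
X = np.array([[np.std(s)] for s in signals])

clf = KNeighborsClassifier(n_neighbors=9)
clf.fit(X, labels)
print(clf.score(X, labels))
```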
Procedia PDF Downloads 312
7623 Simulated Microgravity Inhibits L-Type Calcium Channel Currents by Up-Regulation of miR-103 in Osteoblasts
Authors: Zhongyang Sun, Shu Zhang
Abstract:
In osteoblasts, L-type voltage-sensitive calcium channels (LTCCs), especially the Cav1.2 LTCCs, play fundamental roles in cellular responses to external stimuli, including both mechanical forces and hormonal signals. Several lines of evidence have revealed that the density of bone is increased and the resorption of bone is decreased when these calcium channels in osteoblasts are activated. Numerous studies have shown that mechanical loading promotes bone formation in the modeling skeleton, whereas removal of this stimulus in microgravity results in a reduction in bone mass. However, the effect of microgravity on LTCCs in osteoblasts is still unknown. The aim of this study was to determine whether microgravity influences LTCCs in osteoblasts and to identify the possible underlying mechanisms. We demonstrate that simulated microgravity substantially inhibits LTCCs in osteoblasts by suppressing the expression of Cav1.2. We then show that the up-regulation of miR-103 is involved in the down-regulation of Cav1.2 expression and the inhibition of LTCCs by simulated microgravity in osteoblasts. Our study provides a novel mechanism for the adverse effects of simulated microgravity on osteoblasts, offering a new avenue to further investigate the bone loss caused by microgravity.
Keywords: L-type voltage sensitive calcium channels, Cav1.2, osteoblasts, microgravity
Procedia PDF Downloads 306
7622 Dimensional Investigation of Food Addiction in Individuals Who Have Undergone Bariatric Surgery
Authors: Ligia Florio, João Mauricio Castaldelli-Maia
Abstract:
Background: Food addiction (FA) emerged in the 1990s as a possible contributor to the increasing prevalence of obesity and overweight, in conjunction with changing food environments and mental health conditions. However, FA is not yet listed as a disorder in the DSM-5 or the ICD-11. Although there are controversies and debates in the literature about the classification and construct of FA, the most common approach to assessing it is the use of a research tool, the Yale Food Addiction Scale (YFAS), which approximates the concept of FA to the diagnostic concept of dependence on psychoactive substances. There is a need to explore the dimensional phenotypes assessed by the YFAS in different population groups for a better understanding and scientific support of FA diagnoses. Methods: The primary objective of this project was to investigate the construct validity of the FA concept, as measured by the mYFAS 2.0, in individuals who have undergone bariatric surgery (n = 100) at the Hospital Estadual Mário Covas since 2011. Statistical analyses were conducted using the STATA software. Structural or factor validity was the type of construct validity investigated, using exploratory factor analysis (EFA) and item response theory (IRT) techniques. Results: EFA showed that the one-dimensional model was the most parsimonious. The IRT showed that all criteria contributed to the latent structure, presenting discrimination values greater than 0.5, with most presenting values greater than 2. Conclusion: This study reinforces an FA dimension in patients who underwent bariatric surgery. Within this dimension, we identified the most severe and discriminating criteria for the diagnosis of FA.
Keywords: obesity, food addiction, bariatric surgery, regain
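A small sketch of the exploratory-factor-analysis step on simulated one-dimensional item data. The item count, sample size and loadings are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated item scores: 11 mYFAS-like criteria driven by a single
# latent "food addiction" dimension plus item-specific noise.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))                      # one latent trait
loadings = rng.uniform(0.5, 1.0, size=(1, 11))          # assumed loadings
items = latent @ loadings + 0.5 * rng.normal(size=(200, 11))

# Fit a one-factor model; with a truly one-dimensional generating
# process, every item should load substantially on the single factor.
fa = FactorAnalysis(n_components=1)
fa.fit(items)
print(np.abs(fa.components_).round(2))
```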
Procedia PDF Downloads 76
7621 Influence of Alcohol Type on the Quality of Iota Carrageenan
Authors: Andi Hasizah Mochtar, Meta Mahendradatta, Amran Laga, Metusalach Metusalach, Salengke Salengke, Mariati Bilang, Andi Amijoyo Mochtar, Reta Reta, Aminah Muhdar, Sri Suhartini
Abstract:
This study aims to determine the effect of alcohol type on the quality of iota carrageenan produced with an extraction technology based on an ohmic reactor. The results of this analysis will be used as a reference for selecting the proper type of alcohol for precipitating carrageenan after ohmic-based extraction. The analyses performed included the viscosity, gel strength, and yield of iota carrageenan. The highest viscosity was obtained with isopropyl alcohol precipitation, averaging 291.5 cP (at 160 rpm), followed by methanol, averaging 282 cP, and then ethanol, averaging 206.5 cP. The lowest gel strength, 67.74, was obtained with ethanol precipitation, followed by an average of 74.34 with methanol, and the highest, an average of 80.11, with isopropyl alcohol.
Keywords: extraction of carrageenan, gel strength, ohmic technology, precipitated, seaweed (Eucheuma spinosum), viscosity
Procedia PDF Downloads 221
7620 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea
Authors: Kyomin Lee, Joohee Kim, Sangho Kang
Abstract:
Kori Unit 1, the first commercial nuclear power reactor in South Korea, was a 587 MWe pressurized water reactor that started operation in 1978 and was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first South Korean nuclear power unit to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of steps: first, the plant inventory was investigated based on various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Second, the radiological conditions of systems, structures and components (SSCs) were established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, were established. Fourth, the proper decontamination and dismantling (D&D) technologies were selected, considering various factors. Finally, the amount of decommissioning waste by classification for Kori Unit 1 was estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results show that the expected amounts of decommissioning wastes were less than about 2% and 8% of the total wastes generated (i.e., the sum of clean wastes and radwastes) before and after waste processing, respectively, and that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the results of the evaluation were compared with data from various decommissioning experiences and international/national decommissioning studies. The comparison shows that the radioactive waste amounts from Kori Unit 1 decommissioning were much less than those from the plants decommissioned in the U.S. and were comparable to those from the plants in Europe. This result comes from the differences in disposal cost and clearance criteria (i.e., free release level) between the U.S. and other countries. The preliminary evaluation performed using the methodology established in this study will be useful information in establishing the decommissioning plan, the decommissioning schedule and the waste management strategy, including the transportation, packaging, handling, and disposal of radioactive wastes.
Keywords: characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste
Procedia PDF Downloads 209
7619 Sub-Pixel Mapping Based on New Mixed Interpolation
Authors: Zeyu Zhou, Xiaojun Bi
Abstract:
Due to limited environmental parameters and the limited resolution of the sensor, the universal existence of mixed pixels in remote sensing images restricts their spatial resolution. Sub-pixel mapping technology can effectively improve the spatial resolution. However, the bilinear interpolation algorithm inevitably produces an edge blur effect, which leads to inaccurate sub-pixel mapping results. In order to avoid the edge blur effect in the interpolation process, this paper presents a new edge-directed interpolation algorithm, which applies a covariance-adaptive interpolation algorithm on the edges of the low-resolution image and the bilinear interpolation algorithm in its smooth areas. By using the edge-directed interpolation algorithm, a super-resolution version of the low-resolution image is obtained, giving the percentage of each class within each sub-pixel of the high-resolution image. We then use these probability values as soft attribute estimates and carry out a 'hard classification' at the sub-pixel scale. Finally, we obtain the sub-pixel mapping result. Through experiments, we compared the proposed algorithm with the bilinear algorithm for sub-pixel mapping, and found that the method based on the edge-directed interpolation algorithm has a better edge effect and higher mapping accuracy. These results meet the original intention of the study. At the same time, the method does not require iterative computation or training samples, making it easier to implement.
Keywords: remote sensing images, sub-pixel mapping, bilinear interpolation, edge-directed interpolation
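The bilinear branch used in the smooth areas can be sketched as follows; this is a generic illustration of bilinear interpolation at a fractional position, not the paper's implementation, and the covariance-adaptive edge branch is omitted.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly interpolate a 2-D array at fractional position (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)          # clamp at image border
    x1 = min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    # Interpolate along x on the two bracketing rows, then along y.
    top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx
    bot = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
    return top * (1 - dy) + bot * dy

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
print(bilinear(img, 0.5, 0.5))  # → 1.5, the mean of the four neighbours
```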
Procedia PDF Downloads 229
7618 Prediction of the Mechanical Power in Wind Turbine Powered Car Using Velocity Analysis
Authors: Abdelrahman Alghazali, Youssef Kassem, Hüseyin Çamur, Ozan Erenay
Abstract:
The Savonius is a drag-type vertical axis wind turbine. Savonius wind turbines have a low cut-in speed and can operate at low wind speeds, which makes them suitable for electrical or mechanical generation in low-power applications such as individual domestic installations. The primary purpose of this work was therefore to investigate the relationship between the type of Savonius rotor and the torque and mechanical power generated, and to illustrate how the type of rotor might play an important role in the prediction of the mechanical power of a wind-turbine-powered car. The main purpose of this paper is to predict and investigate, by means of velocity analysis, the aerodynamic effects on the performance of a wind-turbine-powered car that converts wind energy into mechanical energy to overcome the load that rotates the main shaft. The predicted results, based on theoretical analysis, were compared with experimental results obtained from the literature; the error between the two was approximately 20%. The prediction of the torque was done at a wind speed of 4 m/s and an angular velocity of 130 RPM, according to meteorological statistics in Northern Cyprus.
Keywords: mechanical power, torque, Savonius rotor, wind car
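The mechanical power and shaft torque of a rotor can be sketched from the standard wind-power relation P = ½ρACpv³ and T = P/ω. The rotor dimensions and power coefficient below are illustrative assumptions, not the paper's values; only the wind speed (4 m/s) and rotational speed (130 RPM) are quoted from the abstract.

```python
import math

def savonius_power(v, rho=1.225, height=1.0, diameter=1.0, cp=0.15):
    """Mechanical power from the swept area A = H * D of a Savonius rotor.

    v  : wind speed (m/s)
    cp : power coefficient; drag-type rotors are typically low,
         and 0.15 is an assumed illustrative value.
    """
    area = height * diameter
    return 0.5 * rho * area * cp * v ** 3

def shaft_torque(power, rpm):
    """Shaft torque (N*m) from mechanical power and rotational speed."""
    omega = rpm * 2 * math.pi / 60.0    # rad/s
    return power / omega

# Conditions quoted in the abstract: v = 4 m/s, 130 RPM.
p = savonius_power(4.0)
print(round(p, 2), round(shaft_torque(p, 130), 3))
```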
Procedia PDF Downloads 337
7617 Paradigm Shift in Classical Drug Research: Challenges to Modern Pharmaceutical Sciences
Authors: Riddhi Shukla, Rajeshri Patel, Prakruti Buch, Tejas Sharma, Mihir Raval, Navin Sheth
Abstract:
Many classical drugs are claimed to have blood-sugar-lowering properties that make them valuable for people with, or at high risk of, type 2 diabetes. Vijaysar (Pterocarpus marsupium) and Gaumutra (Indian cow urine) have both been credited with antidiabetic properties since primordial times, and in combination they show a synergistic hypoglycaemic effect. This study was undertaken to investigate the hypoglycaemic and anti-diabetic effects of the combination of Vijaysar and Gaumutra in a classical preparation mentioned in Ayurveda, named Pramehari ark. Type 2 diabetes was induced in rats by streptozotocin (STZ, 35 mg/kg) together with a high-fat diet for one month, and the rats were compared with normal rats. Diabetic rats showed raised levels of body weight, triglycerides (TG), total cholesterol, HDL, LDL, and D-glucose concentration, as well as other serum, cardiac and hypertrophic parameters, in comparison with normal rats. After treatment with different doses of the drug, the levels of parameters such as TG, total cholesterol, HDL, LDL, and D-glucose concentration were found to be decreased in the standard as well as the treatment groups. In addition, the treatment groups showed decreased levels of serum markers, cardiac markers, and hypertrophic parameters. The findings demonstrated that Pramehari ark prevented the pathological progression of type 2 diabetes in rats.
Keywords: cow urine, hypoglycemic effect, synergic effect, type 2 diabetes, vijaysar
Procedia PDF Downloads 279
7616 Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features
Authors: Kyi Pyar Zaw, Zin Mar Kyu
Abstract:
Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system, and the character recognition rate may be low or high depending on the extracted features. In the proposed paper, 25 features per character are used in character recognition. Basically, there are three steps in character recognition: character segmentation, feature extraction and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method for character segmentation. In the feature extraction step, features are extracted in two ways. In the first way, 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. In the second way, the input character is divided into 16 blocks; for each block, although 8 feature values are obtained through the eight-direction chain code frequency extraction method, we define the sum of these 8 feature values as a single feature for that block. Therefore, 16 features are extracted from the 16 blocks in the second way. We use the number-of-holes feature to cluster similar characters. We can recognize almost all common Myanmar characters in various font sizes by using these features. All 25 features are used in both the training part and the testing part. In the classification step, the characters are classified by matching all the features of the input character with the already trained features of the characters.
Keywords: chain code frequency, character recognition, feature extraction, features matching, segmentation
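The eight-direction chain code frequency feature can be sketched as below. The contour-tracing convention (Freeman directions, clockwise traversal) and the normalisation are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import numpy as np

# Freeman 8-direction offsets as (row, col) steps: 0=E, 1=NE, 2=N, 3=NW,
# 4=W, 5=SW, 6=S, 7=SE.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code_frequency(contour):
    """8-bin frequency histogram of chain codes along a pixel contour.

    contour : list of (row, col) points in tracing order; consecutive
              points must be 8-connected neighbours.
    """
    hist = np.zeros(8)
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        hist[DIRS[(r1 - r0, c1 - c0)]] += 1
    return hist / max(len(contour) - 1, 1)   # normalised frequencies

# A tiny square contour traced clockwise from the top-left corner:
# the four moves are E, S, W, N, so those four bins each get 0.25.
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(chain_code_frequency(square))
```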
Procedia PDF Downloads 320
7615 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data
Authors: Mohammed Abdullah Alshahrani
Abstract:
This research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional, correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where copy number alterations (CNA) in thousands of genomic regions are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from the original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across the three main PLS algorithms and exploring the unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to the challenge of interpreting the predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function that combines a lasso penalty for sparsity with a Cauchy-distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications, and ordinary least squares (OLS) regression, the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data
Procedia PDF Downloads 49
7614 Experimental Investigation of Nanofluid Heat Transfer in a Plate Type Heat Exchanger
Authors: Eyuphan Manay
Abstract:
This study aimed to determine the convective heat transfer characteristics of water-based silicon dioxide (SiO₂) nanofluids with particle volume fractions of 0.2 and 0.4 vol.%. The nanofluids were tested in a plate-type heat exchanger with six plates, manufactured from stainless steel. Water was driven on the hot flow side, and the nanofluids were driven on the cold flow side, so the thermal energy of the hot water was taken up by the nanofluids. The effect of the inlet temperature of the hot water on the heat transfer performance of the nanofluids was investigated while the inlet temperature of the nanofluids was fixed. In addition, the effects of the particle volume fraction and the cold flow rate on the performance of the system were tested. The results showed that increasing the inlet temperature of the hot flow enhanced heat transfer. Suspending solid particles in the carrier fluid also remarkably enhanced heat transfer, and an increase in the particle volume fraction resulted in an increase in heat transfer.
Keywords: heat transfer enhancement, SiO₂-water, nanofluid, plate heat exchanger
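The heat duty transferred from the hot stream to the nanofluid follows from a simple energy balance, Q = ṁ·cp·(Tin − Tout); all numbers below are illustrative assumptions, not the experiment's measurements:

```python
def heat_duty(m_dot, cp, t_in, t_out):
    """Heat given up by the hot stream, Q = m_dot * cp * (T_in - T_out).

    m_dot : mass flow rate (kg/s)
    cp    : specific heat (J/(kg K))
    """
    return m_dot * cp * (t_in - t_out)

# Illustrative values: 0.05 kg/s of hot water cooled from 60 C to 45 C;
# cp of water ~ 4180 J/(kg K).
q = heat_duty(0.05, 4180.0, 60.0, 45.0)
print(q)  # ~ 3135 W absorbed by the nanofluid stream
```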
Procedia PDF Downloads 203
7613 Iraqi Short Term Electrical Load Forecasting Based on Interval Type-2 Fuzzy Logic
Authors: Firas M. Tuaimah, Huda M. Abdul Abbas
Abstract:
Accurate short term load forecasting (STLF) is essential for a variety of decision-making processes. However, forecasting accuracy can drop due to the presence of uncertainty in the operation of energy systems or unexpected behavior of exogenous variables. The interval type-2 fuzzy logic system (IT2 FLS), with its additional degrees of freedom, provides an excellent tool for handling uncertainties, and it improved the prediction accuracy. The training data used in this study cover the period from January 1, 2012 to February 1, 2012 for the winter season and the period from July 1, 2012 to August 1, 2012 for the summer season. The actual load forecasting period runs from January 22 to 28, 2012 for the winter model and from July 22 to 28, 2012 for the summer model. The real data are for the Iraqi power system, which belongs to the Ministry of Electricity.
Keywords: short term load forecasting, prediction interval, type 2 fuzzy logic systems, electric, computer systems engineering
Procedia PDF Downloads 397
7612 Spatial Integration at the Room-Level of 'Sequina' Slum Area in Alexandria, Egypt
Authors: Ali Essam El Shazly
Abstract:
The slum survey of the 'Sequina' area in Alexandria details the building rooms of twenty building samples according to the integral measure of space syntax. The essence of room organization sets the most integrative 'visitor' domain between the 'inhabitant' wings, with a less integrated 'parent' than 'children' structure and a visual ring of 'balcony' space. Despite the collective real relative asymmetry of 'pheno-type' aggregation, the relative asymmetry of individual layouts reveals a 'geno-type' structure of spatial diversity. The multifunction of rooms optimizes the integral structure of graph and visibility merge, which contrasts with the deep tailing structure of distinctive social domains. The most integrative layout inverts the geno-type into freed rooms of a shallow 'inhabitant' domain against the off-centered 'visitor' space, while the most segregated layout further restricts the pheno-type through a 'visitor' domain isolated from the 'inhabitant' domains across the 'staircase' public domain. The catalyst 'kitchen & living' spaces demonstrate multi-structural dimensions among the various social domains: the former ranges from the most exposed central integrity to the most hidden 'motherhood' territories, while the latter mostly integrates at centrality or at the further ringy 'children' domain. The study concludes with a social structure of spatial integrity for redevelopment, determined through the micro-level survey of rooms with integral dimensions.
Keywords: Alexandria, Sequina slum, spatial integration, space syntax
Procedia PDF Downloads 438
7611 ICAM-2, A Protein of Antitumor Immune Response in Mekong Giant Catfish (Pangasianodon gigas)
Authors: Jiraporn Rojtinnakorn
Abstract:
ICAM-2 (intercellular adhesion molecule 2), or CD102 (cluster of differentiation 102), is a type I transmembrane glycoprotein comprising 2-9 immunoglobulin-like C2-type domains. ICAM-2 plays a particular role in immune response and cell surveillance; it is involved in innate and specific immunity, cell survival signaling, apoptosis, and anticancer activity. An EST clone of ICAM-2 from P. gigas blood cell EST libraries showed high identity to human ICAM-2 (92%), with a conserved region of the ICAM N-terminal domain and part of the Ig superfamily. The ICAM-2 gene and protein have previously been found in mammals; this is the first report of ICAM-2 in fish.
Keywords: ICAM-2, CD102, Pangasianodon gigas, antitumor
Procedia PDF Downloads 226
7610 Temperature Investigations in Two Types of Crimped Connection Using Experimental Determinations
Authors: C. F. Ocoleanu, A. I. Dolan, G. Cividjian, S. Teodorescu
Abstract:
In this paper we present temperature investigations in two types of superposed crimped connections using experimental determinations. All the samples use eight copper wires of 7.1 x 3 mm² crimped by two methods: the first method uses one crimp indent, and the second is a proposed method with two crimp indents. The ferrule is a parallel one. We study the influence of the number and position of crimp indents. The samples are heated with AC current at different current values until a steady-state heating regime is reached. After obtaining the temperature values, we compare them and present the conclusions.
Keywords: crimped connections, experimental determinations, temperature, heat transfer
7609 Classification of Multiple Cancer Types with Deep Convolutional Neural Network
Authors: Nan Deng, Zhenqiu Liu
Abstract:
Thousands of patients with metastatic tumors are diagnosed with cancers of unknown primary sites each year. The inability to identify the primary cancer site may lead to inappropriate treatment and unexpected prognosis. Nowadays, a large amount of genomic and transcriptomic cancer data has been generated by next-generation sequencing (NGS) technologies, and The Cancer Genome Atlas (TCGA) database has accrued thousands of human cancer tumors and healthy controls, which provides an abundance of resources to differentiate cancer types. Meanwhile, deep convolutional neural networks (CNNs) have shown high accuracy in classification among a large number of image object categories. Here, we utilize 25 cancer primary tumors and 3 normal tissues from TCGA, convert their RNA-Seq gene expression profiles to color images, and train, validate and test a CNN classifier directly on these images. The performance results show that our CNN classifier can achieve >80% test accuracy on most of the tumors and normal tissues. Since the gene expression pattern of distant metastases is similar to that of their primary tumors, the CNN classifier may provide a potential computational strategy for identifying the unknown primary origin of metastatic cancer in order to plan appropriate treatment for patients.
Keywords: bioinformatics, cancer, convolutional neural network, deep learning, gene expression pattern
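The profile-to-image conversion step can be sketched as follows; the image side length, padding, and min-max normalization scheme here are illustrative assumptions, not the authors' exact encoding, and the expression vector is mock data:

```python
import numpy as np

def expression_to_image(expr, side=32):
    # Pad the gene-expression vector to side*side*3 values, min-max
    # normalize to [0, 255], and reshape into an RGB image array.
    # (Layout and normalization are assumed for illustration.)
    n = side * side * 3
    v = np.zeros(n, dtype=float)
    v[: min(len(expr), n)] = expr[:n]
    lo, hi = v.min(), v.max()
    if hi > lo:
        v = (v - lo) / (hi - lo)
    return (v * 255).astype(np.uint8).reshape(side, side, 3)

rng = np.random.default_rng(0)
profile = rng.gamma(2.0, 50.0, size=2000)   # mock RNA-Seq expression values
img = expression_to_image(profile)
print(img.shape)
```

An image array like this can then be fed to any standard image-classification CNN, which is what lets the authors reuse image-domain architectures on transcriptomic data.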
7608 Organic Geochemical Evaluation of the Ecca Group Shale: Implications for Hydrocarbon Potential
Authors: Temitope L. Baiyegunhi, Kuiwu Liu, Oswald Gwavava, Christopher Baiyegunhi
Abstract:
Shale gas has recently been the exploration focus for a future energy resource in South Africa. Specifically, the black shales of the lower Ecca Group in the study area are considered to be one of the most prospective targets for shale gas exploration. Evaluation of this potential resource has been restricted due to the lack of exploration and the scarcity of existing drill core data; thus, only limited previous geochemical data exist for these formations. In this study, outcrop and core samples of the Ecca Group were analysed to assess their total organic carbon (TOC), organic matter type, thermal maturity and hydrocarbon generation potential (SP). The results show that these rocks have TOC ranging from 0.11 to 7.35 wt.%. The SP values vary from 0.09 to 0.53 mg HC/g, suggesting poor hydrocarbon generative potential. The plot of S1 versus TOC shows that the source rocks are characterized by autochthonous hydrocarbons. S2/S3 values range between 0.40 and 7.5, indicating Type-II/III, III and IV kerogen. With the exception of one sample from the Collingham Formation, which has an HI value of 53 mg HC/g TOC, all other samples have HI values of less than 50 mg HC/g TOC, suggesting Type-IV kerogen, which is mostly derived from reworked organic matter (mainly dead carbon) with little or no potential for hydrocarbon generation. Tmax values range from 318 to 601 °C, indicating immature to over-mature organic matter. The vitrinite reflectance values range from 2.22 to 3.93%, indicating over-maturity of the kerogen. Binary plots of HI against OI and HI versus Tmax show that the shales contain Type II and mixed Type II-III kerogen, which are capable of generating both natural gas and minor oil at suitable burial depths. Based on the geochemical data, it can be inferred that the source rocks range from immature to over-mature depending on locality and have the potential of producing wet to dry gas at the present stage.
Generally, the Whitehill Formation of the Ecca Group is comparable to the Marcellus and Barnett Shales. This further supports the assumption that the Whitehill Formation has a high probability of being a profitable shale gas play, but only when explored in dolerite-free areas and away from the Cape Fold Belt.
Keywords: source rock, organic matter type, thermal maturity, hydrocarbon generation potential, Ecca Group
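The HI and OI values discussed above derive from Rock-Eval pyrolysis peaks via the standard formulas HI = 100·S2/TOC and OI = 100·S3/TOC. A minimal sketch, with hypothetical sample values and the commonly used (not the authors') HI cut-offs for coarse kerogen typing:

```python
def rock_eval_indices(toc, s1, s2, s3):
    # Standard Rock-Eval derived indices:
    #   HI (hydrogen index)   = 100 * S2 / TOC  [mg HC/g TOC]
    #   OI (oxygen index)     = 100 * S3 / TOC  [mg CO2/g TOC]
    #   PI (production index) = S1 / (S1 + S2)
    hi = 100.0 * s2 / toc
    oi = 100.0 * s3 / toc
    pi = s1 / (s1 + s2)
    return hi, oi, pi

def kerogen_type(hi):
    # Coarse HI-based typing; thresholds are common approximations
    if hi < 50:
        return "Type IV (inert)"
    if hi < 200:
        return "Type III (gas-prone)"
    if hi < 300:
        return "Type II/III (mixed)"
    if hi < 600:
        return "Type II (oil-prone)"
    return "Type I (oil-prone)"

# Hypothetical sample: TOC = 2.0 wt.%, S1 = 0.05, S2 = 0.9, S3 = 0.6 mg/g
hi, oi, pi = rock_eval_indices(2.0, 0.05, 0.9, 0.6)
print(round(hi), round(oi), kerogen_type(hi))
```

For this made-up sample the HI of 45 mg HC/g TOC falls below 50, matching the Type-IV, little-generative-potential interpretation applied to most Ecca samples.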
7607 Semantic Differences between Bug Labeling of Different Repositories via Machine Learning
Authors: Pooja Khanal, Huaming Zhang
Abstract:
Labeling of issues/bugs, also known as bug classification, plays a vital role in software engineering. Some known labels/classes of bugs are 'User Interface', 'Security', and 'API'. Most of the time, when a reporter reports a bug, they try to assign some predefined label to it. Those issues are reported for a project, and each project is a repository in GitHub/GitLab, which contains multiple issues. There are many software project repositories, ranging from individual projects to commercial projects. The labels assigned in different repositories may depend on various factors like human instinct, generalization of labels, the label assignment policy followed by the reporter, etc. While the reporter of an issue may instinctively give that issue a label, another person reporting the same issue may label it differently. This way, it is not known mathematically whether a label in one repository is similar to or different from the label in another repository. Hence, the primary goal of this research is to find the semantic differences between bug labeling of different repositories via machine learning. Independent optimal classifiers for individual repositories are built first using the text features from the reported issues. The optimal classifiers may include a combination of multiple classifiers stacked together. Then, those classifiers are used to cross-test other repositories, which allows the result to be deduced mathematically. The product of this ongoing research includes a formalized open-source GitHub issues database that is used to deduce the similarity of the labels pertaining to the different repositories.
Keywords: bug classification, bug labels, GitHub issues, semantic differences
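The cross-testing idea can be sketched as: train a text classifier on one repository's issues, then score it on another repository's issues to quantify how well the first repository's notion of each label transfers. This is a minimal sketch using a TF-IDF + logistic-regression pipeline; the issue texts and labels below are toy stand-ins, not the formalized GitHub issues database:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy issue corpora for two hypothetical repositories
repo_a = (["login page crashes", "button misaligned on resize",
           "token leak in logs", "xss in comment field"],
          ["ui", "ui", "security", "security"])
repo_b = (["panel renders off screen", "dropdown overlaps text",
           "password stored in plain text", "csrf check missing on form"],
          ["ui", "ui", "security", "security"])

def train(texts, labels):
    # One independent classifier per repository, as in the study
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    return clf

clf_a = train(*repo_a)
# Cross-test: how often does repo A's classifier agree with repo B's labels?
agreement = clf_a.score(repo_b[0], repo_b[1])
print(f"cross-repo label agreement: {agreement:.2f}")
```

A high agreement score suggests the two repositories use the label in a semantically similar way; a low score quantifies the semantic difference the research is after. The actual study may stack several classifiers rather than use a single pipeline.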
7606 Critical Thinking Index of College Students
Authors: Helen Frialde-Dupale
Abstract:
The Critical Thinking Index (CTI) of 150 third-year college students from five State Colleges and Universities (SUCs) in Region I was determined. Only students with a Grade Point Average (GPA) of at least 2.0 from four general classifications of degree courses, namely Education, Arts and Sciences, Engineering and Agriculture, were included. Specific problem No. 1 dealt with the profile variables, namely: age, sex, degree course, monthly family income, number of siblings, high school graduated from, grade point average, personality type, highest educational attainment of parents, and occupation of parents. Problem No. 2 determined the critical thinking index among the respondents. Problem No. 3 investigated whether or not there are significant differences in the critical thinking index among the respondents across the profile variables, while problem No. 4 determined whether or not there are significant relationships between the critical thinking index and selected profile variables, namely: age, monthly family income, number of siblings, and grade point average of the respondents. Finally, in problem No. 5, the items of the critical thinking instrument that obtained the lowest ratings were used as the basis for outlining an intervention program for enhancing the critical thinking index (CTI) of students. The following null hypotheses were tested at the 0.05 level of significance: there are no significant differences in the critical thinking index of the third-year college students across the profile variables; there are no significant relationships between the critical thinking index of the respondents and the selected variables, namely: age, monthly family income, number of siblings, and grade point average.
Keywords: attitude as critical thinker, critical thinking applied, critical thinking index, self-perception as critical thinker
7605 Fluid Inclusions Analysis of Fluorite from the Hammam Jedidi District, North-Eastern Tunisia
Authors: Miladi Yasmine, Bouhlel Salah, Garnit Hechmi
Abstract:
Hydrothermal vein-type deposits of the Hammam Jedidi F-Ba(Pb-Zn-Cu) district are hosted in Lower Jurassic, Cretaceous and Tertiary series, and located near a very important structural lineament (NE-SW) corresponding to the Hammam Jedidi Fault in the Tunisian Dorsale. The circulation of the ore-forming fluid was triggered by a regional tectonic compressive phase which occurred during Miocene time. Mineralization occurs as stratabound and vein-type orebodies adjacent to the Triassic salt diapirs and within faults in Jurassic limestone. Fluid inclusion data show that two distinct fluids were involved in ore deposition: a warmer saline fluid (180°C, 20 wt% NaCl equivalent) and a cooler, less saline fluid (126°C, 5 wt% NaCl equivalent). The contrasting salinities and halogen ratios suggest two fluid end members: one brine originated from the dissolution of halite, as suggested by its high salinity; the other, as indicated by its low Cl/Br ratios, acquired its low salinity by dilution of Br-enriched evaporated seawater. These results are compatible with Mississippi-Valley-type mineralization.
Keywords: Jebel Oust, fluid inclusions, North Eastern Tunisia, mineralization
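Salinities like those quoted above are conventionally derived from the final ice-melting temperature of an inclusion using the Bodnar (1993) equation. A minimal sketch; the Tm(ice) readings below are hypothetical illustrations, not the study's microthermometric data:

```python
def salinity_wt_pct_nacl(tm_ice_c):
    # Bodnar (1993): salinity (wt% NaCl equivalent) from the final
    # ice-melting temperature Tm(ice) of a fluid inclusion.
    # Valid roughly for Tm(ice) between 0 and -21.2 degC.
    theta = -tm_ice_c  # freezing-point depression, positive number
    return 1.78 * theta - 0.0442 * theta**2 + 0.000557 * theta**3

# Hypothetical readings chosen to bracket the two reported end members
for tm in (-3.0, -16.5):
    print(f"Tm(ice) = {tm:5.1f} degC -> "
          f"{salinity_wt_pct_nacl(tm):4.1f} wt% NaCl eq.")
```

For these illustrative values the equation returns roughly 5 and 20 wt% NaCl equivalent, the same order as the cooler dilute and warmer saline fluids described in the abstract.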
7604 Effect of Satureja khuzestanica Jamzad Supplementation on Inflammatory and Antioxidant Indicators in Type 2 Diabetes Patients: A Randomized Controlled Clinical Trial Study
Authors: Maryam Bordbar, Yaser Mokhayeri, Sajjad Roosta, Fatemeh Ghasemi, Saeed Choobkar, Hamidreza Nikbakht, Ebrahim Falahi
Abstract:
Objective: Diabetes mellitus type 2 is the most common metabolic disorder and is growing exponentially worldwide. Satureja khuzestanica Jamzad is a native plant of Iran that grows widely in the south of the country. Its antimicrobial, antioxidant, anti-inflammatory and pain-relieving effects have been documented in animal studies. The purpose of this study is to investigate the effect of daily consumption of S. khuzestanica on inflammatory and antioxidant indicators in type 2 diabetic patients. Methods and Materials: In a double-blind, placebo-controlled clinical trial, 67 patients with type 2 diabetes were included and divided into two groups. One group received S. khuzestanica (a capsule containing 500 mg) and the other group received placebo (500 mg talcum powder) once a day for 12 weeks. After the intervention, the inflammatory and antioxidant indicators of the two groups were compared. Results: In comparison to the placebo group, there was a significant difference in the levels of total antioxidant capacity, superoxide dismutase, catalase, glutathione reductase, and glutathione peroxidase; these antioxidant indicators were higher in the intervention group (P<0.05). Moreover, a considerable decrease in weight, CRP and IL-6 levels was observed in patients in the S. khuzestanica group. Conclusion: Our findings may provide a novel complementary treatment without adverse effects for diabetes complications.
Keywords: Satureja khuzestanica Jamzad, diabetes mellitus, antioxidant indicators, IL-6, C-reactive protein
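Between-group comparisons of the kind reported above (P<0.05) are typically made with a two-sample test; a minimal sketch of Welch's t-statistic, which does not assume equal variances. The measurements below are made-up placeholders, not the trial's data:

```python
import math

def welch_t(a, b):
    # Welch's two-sample t-statistic and (Welch-Satterthwaite) degrees
    # of freedom; significance is then read from a t-distribution.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se2a, se2b = va / len(a), vb / len(b)
    t = (ma - mb) / math.sqrt(se2a + se2b)
    df = (se2a + se2b) ** 2 / (
        se2a**2 / (len(a) - 1) + se2b**2 / (len(b) - 1))
    return t, df

# Made-up total antioxidant capacity values (mmol/L) per group
intervention = [1.42, 1.55, 1.38, 1.61, 1.49, 1.53]
placebo = [1.21, 1.34, 1.18, 1.29, 1.25, 1.31]
t, df = welch_t(intervention, placebo)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A large positive t for these placeholder numbers corresponds to the intervention group showing higher antioxidant capacity, which is the direction of effect the trial reports.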
7603 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on an encoding scheme (e.g. Fisher Vector, Vector of Locally Aggregated Descriptors) with low-level image features (e.g. SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNN) carry richer information but lack geometric invariance. For scene classification, scenes contain scattered objects of different sizes, categories, layouts, numbers and so on, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained on the ImageNet and Places datasets separately are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of different CNNs at multiple scales, we find that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are then merged into a single vector using a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences. Hence, scale-wise normalization followed by average pooling balances the influence of each scale, since a different amount of features is extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which indicates that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
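The scale-wise normalization step described above can be sketched as: normalize the Fisher vector aggregated at each scale independently, then average-pool across scales so that no single scale dominates the merged representation. The vector dimensionality and the power-normalization choice below are illustrative assumptions:

```python
import numpy as np

def scale_wise_merge(fisher_vectors):
    # fisher_vectors: list of per-scale Fisher vectors (same length).
    # Each scale is normalized independently before average pooling,
    # so scales yielding more local features do not dominate.
    normed = []
    for fv in fisher_vectors:
        fv = np.sign(fv) * np.sqrt(np.abs(fv))   # power normalization
        fv = fv / (np.linalg.norm(fv) + 1e-12)   # L2 per scale
        normed.append(fv)
    return np.mean(normed, axis=0)

# Mock per-scale Fisher vectors with very different magnitudes
rng = np.random.default_rng(1)
per_scale = [rng.normal(scale=s, size=4096) for s in (0.5, 1.0, 2.0)]
merged = scale_wise_merge(per_scale)
print(merged.shape)
```

Because each per-scale vector is unit-normalized before pooling, the merged vector weighs all scales equally regardless of how many activations each scale produced, which is the balancing effect the abstract attributes to scale-wise normalization.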
7602 Correlation between Initial Absorption of the Cover Concrete, the Compressive Strength and Carbonation Depth
Authors: Bouzidi Yassine
Abstract:
This experimental work aimed to characterize the porosity of the concrete cover zone using the capillary absorption test, and to establish links between the open porosity characterized by initial absorption, the compressive strength and the carbonation depth. Eight formulations of similar workability, made from ordinary Portland cement (CEM I 42.5) and a compound cement (CEM II/B 42.5), four of each type, are studied. The results allow us to highlight the effect of the cement type: concretes based on CEM II/B 42.5 cement carbonate faster than concretes based on CEM I 42.5. This effect is attributed in part to the lower portlandite Ca(OH)2 content of concretes based on CEM II/B 42.5, but also to the impact of the cement type on the open porosity of the cover concrete. The open porosity of concretes based on CEM I 42.5 is lower than that of concretes based on CEM II/B 42.5. The carbonation depth is a decreasing function of the 28-day compressive strength and increases with the initial absorption. From the results obtained, correlations between the quantity of water absorbed in 1 h, the carbonation depth at 180 days and the compressive strength at 28 days were established with acceptable accuracy.
Keywords: initial absorption, cover concrete, compressive strength, carbonation depth
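Correlations of the kind established above can be sketched with an ordinary least-squares fit of carbonation depth against initial absorption, together with the Pearson coefficient. The data points below are illustrative placeholders, not the study's measurements:

```python
import numpy as np

# Illustrative pairs: water absorbed in 1 h vs 180-day carbonation depth (mm)
absorption = np.array([0.10, 0.14, 0.18, 0.22, 0.27, 0.31])
depth = np.array([4.1, 5.6, 7.2, 8.5, 10.4, 11.9])

# Least-squares line depth = a * absorption + b, plus Pearson r
a, b = np.polyfit(absorption, depth, 1)
r = np.corrcoef(absorption, depth)[0, 1]
print(f"depth ~ {a:.1f} * absorption + {b:.2f}  (r = {r:.3f})")
```

An r close to 1 on such a fit is what "correlations ... were established with acceptable accuracy" amounts to; the same procedure, with strength on the x-axis, yields the decreasing depth-strength relation.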
7601 Feedback in the Language Class: An Action Research Process
Authors: Arash Golzari Koloor
Abstract:
Feedback seems to be an inseparable part of teaching a second/foreign language. One type of feedback is corrective feedback, a form of error treatment in second language classrooms. This study reports on the types of corrective feedback employed in an IELTS preparation course. The types of feedback, their frequencies, and their effectiveness are listed, counted, and interpreted. The results showed that explicit correction and recast were the most frequent types of feedback, while repetition and elicitation were the least frequent. The results also revealed that metalinguistic feedback, elicitation, and explicit correction were the most effective types of feedback and affected learners' performance greatly.
Keywords: classroom interaction, corrective feedback, error treatment, oral performance
7600 Machine Learning Approach for Automating Electronic Component Error Classification and Detection
Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski
Abstract:
Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs are transforming into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study aims to use a Machine Learning (ML) algorithm that interfaces with the Augmented Reality HoloLens and predicts image behavior to classify and detect electronic components. The automated electronic component error classification and detection automatically detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without any supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using HoloLens 2 and stored in a database. The collected image dataset will then be used for training a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis of component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard. The images are forwarded to the server for detection in the background.
A hybrid Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs) algorithm will be used to train the dataset for object recognition and classification. The convolution layers extract image features, which are then classified using a Support Vector Machine (SVM). By adequately labeling the training data, the model will predict and categorize component placements and assess whether students place components correctly. As a result, the data acquired through the HoloLens includes images of students assembling electronic components. The system constantly checks whether students appropriately position components on the breadboard and connect them so that the circuit functions. When students misplace any component, the HoloLens predicts the error before the user places the component in the incorrect position and prompts students to correct their mistakes. This hybrid CNN-SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time, and determine the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
Keywords: augmented reality, machine learning, object recognition, virtual laboratories
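The hybrid CNN-SVM split can be sketched as: a CNN maps each image to a feature vector, and an SVM classifies those vectors (e.g. "correct placement" vs "misplaced"). In this minimal sketch the CNN feature-extraction stage is stubbed out with synthetic feature vectors, since training a CNN is beyond a short example; only the SVM head is real:

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in for CNN convolutional features: in the described pipeline a
# trained CNN would map each HoloLens image to a feature vector; here
# hypothetical, well-separated feature vectors are generated directly.
rng = np.random.default_rng(42)
n_per_class, dim = 30, 64
correct = rng.normal(loc=0.0, size=(n_per_class, dim))
misplaced = rng.normal(loc=1.5, size=(n_per_class, dim))

X = np.vstack([correct, misplaced])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 0 = correct placement

# SVM classification head over the (stubbed) CNN features
clf = SVC(kernel="linear").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In a real deployment, the SVM's prediction on a freshly captured frame would drive the HoloLens prompt that warns the student before the component is seated in the wrong position.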
7599 Multimedia Container for Autonomous Car
Authors: Janusz Bobulski, Mariusz Kubanek
Abstract:
The main goal of the research is to develop a multimedia container structure containing three types of images: RGB, lidar and infrared, properly calibrated to each other. An additional goal is to develop program libraries for creating, saving and restoring this type of file. It will also be necessary to develop a method of synchronizing data from the lidar, RGB and infrared cameras. This type of file could be used in autonomous vehicles, which would certainly facilitate data processing by the intelligent autonomous vehicle management system. Autonomous cars are increasingly breaking into our consciousness, and no one seems to doubt that self-driving cars are the future of motoring. Manufacturers promise that the first of them will reach showrooms within the next few years. Many experts believe that creating a network of communicating autonomous cars will make it possible to completely eliminate accidents. However, to make this possible, it is necessary to develop effective methods of detecting objects around the moving vehicle. In bad weather conditions, this task is difficult on the basis of the RGB (red, green, blue) image alone; in such situations, the system should be supported by information from other sources, such as lidar or infrared cameras. The problem is the different data formats that individual types of devices return, as well as the synchronization and formatting of these data. The goal of the project is therefore to develop a file structure that can contain different types of data. This type of file is called a multimedia container: a container holding many data streams, which allows complete multimedia material to be stored in one file. Among the data streams in such a container are streams of images, video and sound, as well as subtitles and additional information, i.e., metadata.
As shown by preliminary studies, combining RGB and infrared images with lidar data allows for easier data analysis. Thanks to this approach, it will be possible to display the distance to an object in a color photo. Such information can be very useful for drivers and for systems in autonomous cars.
Keywords: autonomous car, image processing, lidar, obstacle detection
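A minimal sketch of such a container: each record stores a stream tag, a shared timestamp (which is what keeps the RGB, lidar and infrared data synchronized on one timeline), and a length-prefixed payload. The magic number, stream tags and record layout here are invented for illustration, not the authors' format:

```python
import struct

MAGIC = b"MMC1"  # hypothetical container signature
STREAMS = {"rgb": 0, "lidar": 1, "ir": 2}

def write_container(path, frames):
    # frames: list of (stream_name, timestamp_us, payload_bytes);
    # all three streams share one microsecond timeline.
    with open(path, "wb") as f:
        f.write(MAGIC)
        for stream, ts_us, payload in frames:
            f.write(struct.pack("<BQI", STREAMS[stream], ts_us, len(payload)))
            f.write(payload)

def read_container(path):
    names = {v: k for k, v in STREAMS.items()}
    out = []
    with open(path, "rb") as f:
        assert f.read(4) == MAGIC, "not a container file"
        while header := f.read(13):  # 1 + 8 + 4 bytes per record header
            sid, ts_us, n = struct.unpack("<BQI", header)
            out.append((names[sid], ts_us, f.read(n)))
    return out

# One synchronized frame triple sharing timestamp 1000 us
write_container("frame0.mmc", [("rgb", 1000, b"\x10" * 12),
                               ("lidar", 1000, b"\x20" * 8),
                               ("ir", 1000, b"\x30" * 6)])
print(read_container("frame0.mmc"))
```

Because all three records carry the same timestamp, a reader can pair each lidar return and infrared frame with the corresponding RGB image, which is what enables overlaying per-pixel distance onto the color photo. A production format would add per-stream calibration metadata to the header.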