Search results for: geographically weighted principal components analysis
30884 QSRR Analysis of 17-Picolyl and 17-Picolinylidene Androstane Derivatives Based on Partial Least Squares and Principal Component Regression
Authors: Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Lidija Jevrić, Evgenija Djurendić, Jovana Ajduković
Abstract:
There are several methods for determining the lipophilicity of biologically active compounds; however, chromatography has proven to be a very suitable method for this purpose. Chromatographic (C18-RP-HPLC) analysis of a series of 24 17-picolyl and 17-picolinylidene androstane derivatives was carried out. The obtained retention indices (logk, methanol (90%) / water (10%)) were correlated with calculated physicochemical and lipophilicity descriptors. The QSRR analysis was carried out applying principal component regression (PCR) and partial least squares regression (PLS). The PCR and PLS models were selected on the basis of the highest variance and the lowest root mean square error of cross-validation. The obtained PCR and PLS models successfully correlate the calculated molecular descriptors with the logk parameter, indicating the significance of the lipophilicity of compounds in the chromatographic process. On the basis of these results, it can be concluded that the logk parameters of the analyzed androstane derivatives can be considered as their chromatographic lipophilicity. These results are part of the project No. 114-451-347/2015-02, financially supported by the Provincial Secretariat for Science and Technological Development of Vojvodina and CMST COST Action CM1105.
Keywords: androstane derivatives, chromatography, molecular structure, principal component regression, partial least squares regression
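A minimal sketch of the model-selection step described above, assuming scikit-learn and synthetic stand-in data (the actual descriptors and logk values are not given in the abstract): PCR and PLS models are compared by cross-validated RMSE over a range of component counts.

```python
# Illustrative sketch (not the authors' code): select PCR vs. PLS models of
# retention (logk) against molecular descriptors by cross-validated RMSE.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 10))                 # hypothetical descriptor matrix, 24 derivatives
y = X[:, :3] @ [0.8, -0.5, 0.3] + rng.normal(scale=0.1, size=24)  # logk stand-in

def cv_rmse(model):
    scores = cross_val_score(model, X, y, cv=6, scoring="neg_root_mean_squared_error")
    return -scores.mean()

for n in range(1, 6):
    pcr = make_pipeline(StandardScaler(), PCA(n_components=n), LinearRegression())
    pls = PLSRegression(n_components=n)
    print(f"{n} components: PCR RMSECV={cv_rmse(pcr):.3f}  PLS RMSECV={cv_rmse(pls):.3f}")
```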
Procedia PDF Downloads 276
30883 Fuzzy Set Qualitative Comparative Analysis in Business Models' Study
Authors: K. Debkowska
Abstract:
The aim of this article is to present the possibilities of using Fuzzy Set Qualitative Comparative Analysis (fsQCA) in research concerning the business models of enterprises. FsQCA is a bridge between quantitative and qualitative research. Its potential can be used in the analysis and evaluation of business models. The article presents the results of a study conducted on enterprises belonging to different sectors: transport and logistics, industry, building construction, and trade. The enterprises were researched taking into account the components of their business models and the financial condition of the companies. Business models are areas of a complex and heterogeneous nature. The use of fsQCA made it possible to answer the following question: which components of a business model, and in which configuration, influence a better financial condition of enterprises. The analysis was performed separately for particular sectors. This made it possible to compare the combinations of business models' components that actively influence the financial condition of enterprises in the analyzed sectors. The following components of business models were analyzed for the purposes of the study: Key Partners, Key Activities, Key Resources, Value Proposition, Channels, Cost Structure, Revenue Streams, Customer Segments, and Customer Relationships. These components constituted the variables shaping the financial results of enterprises. The results of the study lead us to believe that fsQCA can help in analyzing and evaluating a business model, which is important in terms of making a business decision about the business model used or its change. In addition, results obtained by fsQCA can be applied by all stakeholders connected with the company.
Keywords: business models, components of business models, data analysis, fsQCA
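For readers unfamiliar with the mechanics of fsQCA, the sketch below shows its two core measures, consistency and coverage (in Ragin's standard formulation), computed for one hypothetical configuration of business model components; the firms, memberships, and outcome values are invented, not the study's data.

```python
# Minimal fsQCA sketch: consistency and coverage of one configuration,
# assuming calibrated fuzzy-set memberships in [0, 1].
import numpy as np

# Fuzzy memberships for 6 hypothetical firms in two business model components
key_partners   = np.array([0.9, 0.7, 0.2, 0.8, 0.1, 0.6])
value_prop     = np.array([0.8, 0.9, 0.3, 0.6, 0.2, 0.9])
good_condition = np.array([0.9, 0.8, 0.2, 0.7, 0.3, 0.8])  # outcome: financial condition

# Membership in "KeyPartners AND ValueProposition" (fuzzy AND = minimum)
config = np.minimum(key_partners, value_prop)

consistency = np.sum(np.minimum(config, good_condition)) / np.sum(config)
coverage    = np.sum(np.minimum(config, good_condition)) / np.sum(good_condition)
print(f"consistency={consistency:.2f}, coverage={coverage:.2f}")
# Consistency above ~0.8 is conventionally read as the configuration being
# sufficient for the outcome; coverage indicates its empirical relevance.
```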
Procedia PDF Downloads 170
30882 Humeral Head and Scapula Detection in Proton Density Weighted Magnetic Resonance Images Using YOLOv8
Authors: Aysun Sezer
Abstract:
Magnetic Resonance Imaging (MRI) is one of the advanced diagnostic tools for evaluating shoulder pathologies. Proton Density (PD)-weighted MRI sequences prove highly effective in detecting edema. However, they are deficient in the anatomical identification of bones due to a trauma-induced decrease in signal-to-noise ratio and blur in the traumatized cortices. Computer-based diagnostic systems require precise segmentation, identification, and localization of anatomical regions in medical imagery. Deep learning-based object detection algorithms exhibit remarkable proficiency in real-time object identification and localization. In this study, the YOLOv8 model was employed to detect humeral head and scapular regions in 665 axial PD-weighted MR images. The YOLOv8 configuration achieved an overall success rate of 99.60% and 89.90% for detecting the humeral head and scapula, respectively, at an intersection over union (IoU) threshold of 0.5. Our findings indicate significant promise in employing YOLOv8-based detection for the humerus and scapula regions, particularly in the context of PD-weighted images affected by both noise and intensity inhomogeneity.
Keywords: YOLOv8, object detection, humerus, scapula, MRI
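A hedged sketch of how such a detector is typically trained and evaluated with the Ultralytics YOLOv8 API; the dataset YAML, image file, and hyperparameters are placeholders, not the study's actual configuration.

```python
# Sketch of a YOLOv8 detection workflow with the Ultralytics package.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # pretrained nano model
model.train(data="shoulder_pd_mri.yaml",        # hypothetical dataset config
            epochs=100, imgsz=640)              # classes: humeral head, scapula
metrics = model.val()                           # reports mAP@0.5 and per-class AP
results = model.predict("axial_pd_slice.png")   # boxes, classes, confidences
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)
```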
Procedia PDF Downloads 66
30881 Investigating the Demand of Short-Shelf Life Food Products for SME Wholesalers
Authors: Yamini Raju, Parminder S. Kang, Adam Moroz, Ross Clement, Alistair Duffy, Ashley Hopwell
Abstract:
Accurate prediction of fresh produce demand is one of the challenges faced by Small and Medium Enterprise (SME) wholesalers. Current research in this area has focused on a limited number of factors specific to a single product or business type. This paper gives an overview of the current literature on the variability factors used to predict demand and the existing forecasting techniques for short-shelf-life products. It then extends this by adding new factors and investigating whether there is a time lag and a possibility of noise in the orders. It also identifies the most important factors using correlation and Principal Component Analysis (PCA).
Keywords: demand forecasting, deteriorating products, food wholesalers, principal component analysis, variability factors
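As an illustration of the factor-screening step, the sketch below ranks hypothetical variability factors by correlation with orders and by their PCA loadings; the factor names and data are invented stand-ins for the wholesaler dataset.

```python
# Illustrative factor screening: correlation with orders, then PCA loadings.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "temperature": rng.normal(15, 5, n),
    "weekend":     rng.integers(0, 2, n),
    "promotion":   rng.integers(0, 2, n),
    "price":       rng.normal(2.0, 0.2, n),
})
df["orders"] = 50 + 2 * df.temperature + 15 * df.promotion + rng.normal(0, 5, n)

print(df.corr()["orders"].drop("orders"))       # correlation screening of factors

Z = StandardScaler().fit_transform(df.drop(columns="orders"))
pca = PCA().fit(Z)
print(pca.explained_variance_ratio_)            # variance explained per component
print(pca.components_[0])                       # PC1 loadings: relative factor importance
```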
Procedia PDF Downloads 520
30880 AgriFood Model in Ankara Regional Innovation Strategy
Authors: Coskun Serefoglu
Abstract:
The study aims to analyse how a traditional sector such as agri-food could be mobilized through regional innovation strategies. A principal component analysis, as well as qualitative information such as in-depth interviews, focus groups, and surveys, was employed to find the priority sectors. An agri-food model was developed which includes both a linear model and an interactive model. The model consists of two main components, one of which is technological integration and the other agricultural extension, which is based on the land-grant university approach of the U.S., not a common practice in Turkey.
Keywords: regional innovation strategy, interactive model, agri-food sector, local development, planning, regional development
Procedia PDF Downloads 149
30879 Air Quality Forecast Based on Principal Component Analysis-Genetic Algorithm and Back Propagation Model
Authors: Bin Mu, Site Li, Shijin Yuan
Abstract:
Under the circumstances of environmental deterioration, people are increasingly concerned about the quality of the environment, especially air quality. As a result, it is of great value to give accurate and timely forecasts of the AQI (air quality index). In order to simplify the influencing factors of air quality in a city and forecast the city's AQI for tomorrow, this study used MATLAB software and adopted the method of constructing a mathematical PCA-GABP model to provide a solution. To be specific, the study first performed principal component analysis (PCA) on the factors influencing tomorrow's AQI, including aspects of weather, industrial waste gas, and today's IAQI data. Then, a back propagation neural network model (BP), optimized by a genetic algorithm (GA), was used to forecast tomorrow's AQI. To verify the validity and accuracy of the PCA-GABP model's forecast capability, the study used two statistical indices to evaluate the AQI forecast results (normalized mean square error and fractional bias). Eventually, the study reduced the mean square error by optimizing the individual gene structure in the genetic algorithm and adjusting the parameters of the back propagation model. To conclude, the performance of the model in forecasting the AQI is comparatively convincing, and the model is expected to have a positive effect on AQI forecasting in the future.
Keywords: AQI forecast, principal component analysis, genetic algorithm, back propagation neural network model
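A simplified sketch of the pipeline, assuming scikit-learn and synthetic data, with the GA weight-tuning step omitted: PCA compresses today's influencing factors, a back propagation network (an MLP) forecasts tomorrow's AQI, and the forecast is scored with the two indices named above.

```python
# PCA + back-propagation forecast sketch, scored with NMSE and fractional bias.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(365, 12))                               # hypothetical daily factors
y = X[:, :4] @ [20, 10, 5, 3] + 80 + rng.normal(0, 5, 365)   # next-day AQI stand-in

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
pred = model.fit(X_tr, y_tr).predict(X_te)

nmse = np.mean((y_te - pred) ** 2) / (y_te.mean() * pred.mean())  # normalized MSE
fb = 2 * (y_te.mean() - pred.mean()) / (y_te.mean() + pred.mean())  # fractional bias
print(f"NMSE={nmse:.4f}, FB={fb:.4f}")
```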
Procedia PDF Downloads 228
30878 Assessment of Social Vulnerability of Urban Population to Floods – a Case Study of Mumbai
Authors: Sherly M. A., Varsha Vijaykumar, Subhankar Karmakar, Terence Chan, Christian Rau
Abstract:
This study aims at proposing an indicator-based framework for assessing the social vulnerability of any coastal megacity to floods. The final set of social vulnerability indicators is chosen from a set of feasible and available indicators, prepared using a Geographic Information System (GIS) framework on a smaller scale, considering a 1-km grid cell, to provide insight into the spatial variability of vulnerability. The optimal weight for each individual indicator is assigned using data envelopment analysis (DEA), as it avoids subjective weights and improves confidence in the results obtained. In order to de-correlate and reduce the dimension of the multivariate data, principal component analysis (PCA) has been applied. The proposed methodology is demonstrated on twenty-four wards of Mumbai under the jurisdiction of the Municipal Corporation of Greater Mumbai (MCGM). This framework of vulnerability assessment is not limited to the present study area and may be applied to other urban damage centers.
Keywords: urban floods, vulnerability, data envelopment analysis, principal component analysis
Procedia PDF Downloads 361
30877 Critical Analysis of Heat Exchanger Cycle for its Maintainability Using Failure Modes and Effect Analysis and Pareto Analysis
Authors: Sayali Vyas, Atharva Desai, Shreyas Badave, Apurv Kulkarni, B. Rajiv
Abstract:
Failure Modes and Effect Analysis (FMEA) is an efficient evaluation technique for identifying potential failures in products, processes, and services. FMEA is designed to identify and prioritize failure modes. It proves to be a useful method for identifying and correcting possible failures at the earliest possible stage so that the consequences of poor performance can be avoided. In this paper, the FMEA tool is used to detect failures of various components of the heat exchanger cycle and to identify critical failures of the components that may hamper the system's performance. Further, a detailed Pareto analysis is done to find the most critical components of the cycle, the causes of their failures, and possible recommended actions. This paper can be used as a checklist to help in the maintainability of the system.
Keywords: FMEA, heat exchanger cycle, Ishikawa diagram, Pareto analysis, RPN (Risk Priority Number)
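A minimal sketch of the FMEA-plus-Pareto workflow: RPN = severity × occurrence × detection is computed for each failure mode, then the cumulative RPN share isolates the "vital few" modes. The heat exchanger failure modes and ratings below are hypothetical.

```python
# FMEA RPN ranking followed by a Pareto (80/20) cut on cumulative RPN share.
import pandas as pd

fmea = pd.DataFrame({
    "failure_mode": ["tube fouling", "gasket leak", "pump cavitation",
                     "valve sticking", "sensor drift"],
    "severity":   [7, 8, 6, 5, 4],
    "occurrence": [8, 4, 5, 3, 6],
    "detection":  [6, 5, 4, 4, 3],
})
fmea["RPN"] = fmea.severity * fmea.occurrence * fmea.detection
fmea = fmea.sort_values("RPN", ascending=False).reset_index(drop=True)
fmea["cum_share"] = fmea.RPN.cumsum() / fmea.RPN.sum()
print(fmea)

vital_few = fmea[fmea.cum_share <= 0.8]          # modes to prioritize for maintenance
print(vital_few.failure_mode.tolist())
```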
Procedia PDF Downloads 402
30876 Automatic Seizure Detection Using Weighted Permutation Entropy and Support Vector Machine
Authors: Noha Seddik, Sherine Youssef, Mohamed Kholeif
Abstract:
The automated epileptic seizure detection research field has emerged in recent years; it involves analyzing Electroencephalogram (EEG) signals instead of the traditional visual inspection performed by expert neurologists. In this study, a Support Vector Machine (SVM) that uses Weighted Permutation Entropy (WPE) as the input feature is proposed for classifying normal and seizure EEG records. WPE is a modified statistical parameter of permutation entropy (PE) that measures the complexity and irregularity of a time series. It incorporates both the mapped ordinal pattern of the time series and the information contained in the amplitude of its sample points. The proposed system utilizes the fact that entropy-based measures for EEG segments during an epileptic seizure are lower than those for normal EEG.
Keywords: electroencephalogram (EEG), epileptic seizure detection, weighted permutation entropy (WPE), support vector machine (SVM)
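A compact implementation of WPE following a common formulation in the literature, in which each ordinal pattern is weighted by the variance of its embedding vector, plus a toy SVM step; the signals, embedding parameters, and labels are synthetic assumptions, not the study's data.

```python
# Weighted permutation entropy (variance-weighted ordinal patterns) + toy SVM.
import numpy as np
from math import factorial
from sklearn.svm import SVC

def wpe(x, m=4, tau=1):
    n = len(x) - (m - 1) * tau
    vecs = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    patterns = np.argsort(vecs, axis=1)            # ordinal pattern of each vector
    weights = vecs.var(axis=1)                     # amplitude information as weight
    _, inverse = np.unique(patterns, axis=0, return_inverse=True)
    pw = np.bincount(inverse, weights=weights)     # weight mass per unique pattern
    pw = pw / pw.sum()
    pw = pw[pw > 0]
    return -np.sum(pw * np.log(pw)) / np.log(factorial(m))   # normalized WPE

rng = np.random.default_rng(3)
t = np.linspace(0, 60, 1000)
normal  = [wpe(rng.normal(size=1000)) for _ in range(20)]                    # irregular: high WPE
seizure = [wpe(np.sin(t) + 0.2 * rng.normal(size=1000)) for _ in range(20)]  # rhythmic: low WPE
X = np.array(normal + seizure).reshape(-1, 1)
y = np.array([0] * 20 + [1] * 20)
print(SVC(kernel="rbf").fit(X, y).score(X, y))     # training accuracy of the toy model
```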
Procedia PDF Downloads 372
30875 Matlab/Simulink Simulation of Solar Energy Storage System
Authors: Mustafa A. Al-Refai
Abstract:
This paper investigates energy storage technologies that can potentially enhance the use of solar energy. Water electrolysis systems are seen as the principal means of producing a large amount of hydrogen in the future. Starting from the analysis of the models of the system components, a complete simulation model was realized in the Matlab-Simulink environment. Results of the numerical simulations are provided. The operation of the electrolysis and photovoltaic array combination is verified at various insolation levels. It is pointed out that solar cell arrays and electrolysers produce the expected results with solar energy inputs that are continuously varying.
Keywords: electrolyzer, simulink, solar energy, storage system
Procedia PDF Downloads 434
30874 Review, Analysis and Simulation of Advanced Technology Solutions of Selected Components in Power Electronics Systems (PES) of More Electric Aircraft
Authors: Lucjan Setlak, Emil Ruda
Abstract:
The subject of this paper is the review, comparative analysis, and simulation of selected components of power electronics systems (PES), consistent with the concept of a more electric aircraft (MEA). The comparative analysis and simulation in the MATLAB/Simulink software environment were carried out on a group of representative civil (B-787, A-380) and military (F-22 Raptor, F-35) aircraft, in the context of the multi-pulse converters used in them (6- and 12-pulse, and 18- and 24-pulse), which are key components of the high-tech on-board electronics of the autonomous power systems (ASE) of modern aircraft (airplanes of the future).
Keywords: converters, electric machines, MEA (more electric aircraft), PES (power electronics systems)
Procedia PDF Downloads 494
30873 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification
Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos
Abstract:
Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancer in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist's visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, and so histopathological examination of biopsy samples is currently considered to be the gold standard for obtaining definite diagnoses. Machine learning is defined as the study of computational algorithms that can use mathematical relationships and patterns, complex or not, from empirical and scientific data to make reliable decisions. In this regard, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR, or radiography images, and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool, which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and differentiating between different brain tissue types. Moreover, a future purpose is to present an alternative way of quick and accurate diagnosis in order to save time and resources in the daily medical workflow. Materials and Methods: A cohort of 80 pediatric subjects was used: 60 patients with a diagnosis of posterior fossa tumor (20 ependymomas, 20 astrocytomas, 20 medulloblastomas) and 20 healthy children. The MR sequences used for every single patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed following a number of steps, including noise reduction, bias-field correction, thresholding, co-registration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of features were chosen for investigation, including age, tumor shape characteristics, image intensity characteristics, and texture features. After selecting the features that achieve the highest accuracy using the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support Vector Machines, C4.5 Decision Tree, and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA and MATLAB platforms, respectively. Results-Conclusions: The results and the accuracy of image classification for each type of glioma by the four different algorithms are still in process.
Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology
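An illustrative sketch of the comparison stage, assuming scikit-learn and a synthetic feature table: cross-validated accuracy of the four classifier families named above. The entropy-criterion tree approximates C4.5, and an MLP stands in for the CNN, which in practice would consume the images directly.

```python
# Cross-validated comparison of four classifiers on extracted image features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 12))       # 80 subjects, 12 shape/intensity/texture features
y = np.repeat([0, 1, 2, 3], 20)     # ependymoma / astrocytoma / medulloblastoma / healthy
X += y[:, None] * 0.5               # inject synthetic class structure

models = {
    "k-NN": KNeighborsClassifier(5),
    "SVM": SVC(kernel="rbf"),
    "C4.5-style tree": DecisionTreeClassifier(criterion="entropy"),
    "MLP (CNN stand-in)": MLPClassifier((32,), max_iter=2000, random_state=0),
}
for name, m in models.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), m), X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```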
Procedia PDF Downloads 149
30872 Disparities in the Levels of Economic Development in Uttar Pradesh: A Regional Analysis
Authors: Naushaba Naseem Ahmed
Abstract:
Economic development does not merely depend upon the level of development but also on its distributive aspect. This is a serious issue: the fruits of development are not equally distributed among different sections of people and different parts of the country, and this causes regional disparities in the levels of socio-economic development. Different parts of the country have different resource endowments in terms of natural, human, and capital resources. Under uniform conditions for growth, those areas that have better resources are favourably placed and grow comparatively faster than other areas. Thus, with every stage of development, the gap between resourceful and less resourceful areas goes on widening. This paper is an attempt to highlight the levels of disparities in terms of economic development with the help of selected variables. Principal component analysis, correlation, and coefficient of variation are the techniques used in the paper, which employs published data for the analysis. The results show that the Western region of Uttar Pradesh is more developed, followed by the Central region. There is an urgent need for investment and developmental policies for backward regions like the Bundelkhand region of Uttar Pradesh.
Keywords: coefficient of variation, correlation, economic development, principal component analysis
Procedia PDF Downloads 261
30871 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)
Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira
Abstract:
Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome the lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demand of the population, taking into account their habits and the risks that these foods may cause. Wheat and its by-products, such as semolina, have been strongly indicated for use as a food vehicle, since wheat is widely consumed and used in the production of other foods. These products have been strategically used to add some nutrients, such as fibers. Methods of analysis and quantification of these kinds of components are destructive and require lengthy sample preparation and analysis. Therefore, the industry has searched for faster and less invasive methods, such as Near-Infrared Spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements, yielding a high amount of data. Therefore, NIR spectroscopy requires calibration with mathematical and statistical tools (chemometrics) to extract analytical information from the corresponding spectra, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). PCA is well suited for NIR, since it can handle many spectra at a time and be used for non-supervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectral data to a smaller number of latent variables for further interpretation. On the other hand, LDA is a supervised method that searches for the Canonical Variables (CV) with the maximum separation among different categories. In LDA, the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable near-infrared spectrometer (NIR) for the identification and classification of pure and fiber-fortified semolina samples. The fiber was added to semolina in two different concentrations, and after spectra acquisition, the data were used for PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification range of the samples using LDA was between 78.3% and 95% for calibration and 75% and 95% for cross-validation. Thus, after multivariate analysis such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
Keywords: chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina
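A hedged sketch of a PCA-LDA chemometric pipeline on simulated spectra (Gaussian bands rather than real NIR measurements): PCA compresses the highly collinear spectra, and LDA is cross-validated on the scores. The wavelength range, band position, and fiber effect are assumptions.

```python
# PCA-LDA classification of simulated "pure vs. fiber-fortified" spectra.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
wavelengths = np.linspace(900, 1700, 200)             # nm; a typical portable NIR range

def spectrum(fiber_level):                            # fiber scales an absorption band
    band = np.exp(-((wavelengths - 1200) ** 2) / 2e3)
    return band * (1 + 0.15 * fiber_level) + rng.normal(0, 0.01, wavelengths.size)

X = np.array([spectrum(f) for f in [0] * 30 + [1] * 30 + [2] * 30])
y = np.repeat([0, 1, 2], 30)                          # pure, low-fiber, high-fiber

clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=5)
print(f"PCA-LDA cross-validated accuracy: {acc.mean():.2f}")
```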
Procedia PDF Downloads 212
30870 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice
Authors: Diana Reckien
Abstract:
Vulnerability assessments are increasingly used to support policy-making in complex environments, like urban areas. Usually, vulnerability studies include the construction of aggregate (sub-)indices and the subsequent mapping of the indices across an area of interest. Vulnerability studies have a couple of advantages: they are great communication tools, can inform a wider general debate about environmental issues, and can help to allocate and efficiently target scarce resources for adaptation policy and planning. However, they also face a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and there is no single framework or methodology that has proven to serve best in certain environments; indicators vary highly according to the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators that are mapped easily distort or bias the picture of vulnerability, as they hide the underlying causes of vulnerability and level out conflicting reasons for vulnerability in space. So there is an urgent need to further develop the methodology of vulnerability studies towards a common framework, which is one aim of this paper. We introduce a social vulnerability approach which, compared with other approaches of bio-physical or sectoral vulnerability studies, is relatively developed in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first one is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area. The second approach includes variable reduction, mostly Principal Component Analysis (PCA), which reduces the number of interrelated variables to a smaller number of less correlated components that are also added to form a composite index. We test these two approaches of constructing indices on the area of New York City, as well as two different metrics of input variables, and compare the outcome for the 5 boroughs of NY. Our analysis shows that the mapping exercise yields particularly different results in the outer regions and parts of the boroughs, such as Outer Queens and Staten Island. However, some of these parts, particularly the coastal areas, receive the highest attention in the current adaptation policy. We infer from this that the current adaptation policy and practice in NY might need to be discussed, as these outer urban areas show relatively low social vulnerability compared with the more central parts, i.e., the high-density areas of Manhattan, Central Brooklyn, Central Queens, and the Southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage in case of hazards in those areas. This is conceivable, e.g., during large heatwaves, which would affect the inner and poorer parts of the city more than the outer urban areas.
In light of the recent planning practice of NY, one needs to question and discuss who in NY makes adaptation policy for whom; the presented analyses point towards an under-representation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in the current adaptation practice in New York City.
Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity
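The two index-construction approaches compared in the study can be stated compactly, as in the sketch below; the tract indicators, weights, and data are hypothetical.

```python
# Additive weighted index vs. PCA-based index on a synthetic indicator table.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
# columns: % poverty, % over 65, % minority, % no vehicle  (hypothetical indicators)
X = rng.random(size=(500, 4))
Z = StandardScaler().fit_transform(X)

# (1) additive approach: expert importance weights, here assumed equal
w = np.array([0.25, 0.25, 0.25, 0.25])
additive_index = Z @ w

# (2) PCA approach: keep the components explaining most variance, sum the scores
pca_index = PCA(n_components=2).fit_transform(Z).sum(axis=1)

# the two indices can rank the same unit areas quite differently
print(np.corrcoef(additive_index, pca_index)[0, 1])
```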
Procedia PDF Downloads 395
30869 Potential Ecological Risk Assessment of Selected Heavy Metals in Sediments of Tidal Flat Marsh, the Case Study: Shuangtai Estuary, China
Authors: Chang-Fa Liu, Yi-Ting Wang, Yuan Liu, Hai-Feng Wei, Lei Fang, Jin Li
Abstract:
Heavy metals in sediments can cause adverse ecological effects when they exceed given criteria. The present study investigated sediment environmental quality, pollutant enrichment, ecological risk, and source identification for copper, cadmium, lead, zinc, mercury, and arsenic in sediments collected from the tidal flat marsh of the Shuangtai estuary, China. The arithmetic mean integrated pollution index, geometric mean integrated pollution index, fuzzy integrated pollution index, and principal component score were used to characterize sediment environmental quality; fuzzy similarity and the geo-accumulation index were used to evaluate pollutant enrichment; the correlation matrix, principal component analysis, and cluster analysis were used to identify the sources of pollution; and the environmental risk index and potential ecological risk index were used to assess ecological risk. The environmental quality of the sediment is classified as a very low degree of contamination or low contamination. By pollutant enrichment analysis, the order of similarity to the element background of soil in the Liaohe plain is the regions of Sanjiaozhou, Honghaitan, Sandaogou, and Xiaohe. The source identification indicates that correlations among metals are significant, except between copper and cadmium. Cadmium, lead, zinc, mercury, and arsenic are clustered together in the first principal component, while copper is clustered as the second principal component. The environmental risk assessment level is scaled to no risk in the studied area. The order of potential ecological risk is As > Cd > Hg > Cu > Pb > Zn.
Keywords: ecological risk assessment, heavy metals, sediment, marsh, Shuangtai estuary
Procedia PDF Downloads 347
30868 Knowledge of Strategies to Teach Reading Components Among Teachers of Hard of Hearing Students
Authors: Khalid Alasim
Abstract:
This study investigated Saudi Arabian elementary school teachers' knowledge of strategies to teach reading components to hard-of-hearing students. The study focused on four of the five reading components the National Reading Panel (NRP, 2000) identified: phonemic awareness, phonics, vocabulary, and reading comprehension, and explored the relationship between teachers' demographic characteristics and their knowledge of the strategies as well. An explanatory sequential mixed-methods design was used that included two phases. The quantitative phase examined the knowledge of these Arabic reading components among 89 elementary school teachers of hard-of-hearing students, and the qualitative phase consisted of interviews with 10 teachers. The results indicated that the teachers have a great deal of knowledge (above the mean score) of strategies to teach reading components. Specifically, teachers' knowledge of strategies to teach the vocabulary component was the highest. The results also showed no significant association between teachers' demographic characteristics and their knowledge of strategies to teach reading components. The qualitative analysis revealed two themes: 1) teachers' lack of basic knowledge of strategies to teach reading components, and 2) the absence of in-service courses and training programs in reading for teachers.
Keywords: knowledge, reading, components, hard-of-hearing, phonology, vocabulary
Procedia PDF Downloads 80
30867 Preservation Model to Process 'La Bomba Del Chota' as a Living Cultural Heritage
Authors: Lucia Carrion Gordon, Maria Gabriela Lopez Yanez
Abstract:
This project focuses on heritage concepts and their importance in an ever-evolving and changing Digital Era, where system solutions have to be sustainable, efficient, and suitable to basic needs. The prototype has to cover the principal requirements of the case studies. How to preserve the sociological ideas of dances in Ecuador, like 'La Bomba', is the best example of and challenge in preserving intangible data. The same idea is applicable to books and music. History, and how to keep it, is the principal mission of heritage preservation. The dance of La Bomba is rooted in a specific movement system whose main part is the sideward hip movement. La Bomba's movement system is the surface manifestation of a whole system of knowledge whose principal characteristic is the historical relation of Choteños with their land and their families.
Keywords: digital preservation, heritage, IT management, data, metadata, ontology, serendipity
Procedia PDF Downloads 386
30866 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. The physical-morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions of an interpolated point. Quantifying local UHI for extensive areas based on weather stations' observations alone is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to enable NSAT estimation by accounting for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations and satellite-derived LST. The approach is structured in two main steps. First, a GWR model has been set up to estimate NSAT at low resolution, by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and the LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer of spatial resolution, have been employed. Two time periods are considered according to the satellite revisit period, i.e., 10:30 am and 9:30 pm. Afterward, the results have been downscaled to 30 meters of spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable) and the multispectral information provided by the Landsat mission, in particular the albedo, together with the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. Albedo and DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km2 and encompasses a population of over 3 million inhabitants. Both models, low- (1 km) and high-resolution (30 meters), have been validated by cross-validation that relies on indicators such as R2, Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milano only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
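A minimal numpy sketch of the core GWR idea used in both steps, under synthetic coordinates, bandwidth, and data: at each location, a locally weighted least-squares fit of station air temperature on LST with Gaussian kernel weights. An operational model would also calibrate the bandwidth, e.g., by cross-validation.

```python
# Geographically weighted regression: local WLS fits with a Gaussian kernel.
import numpy as np

rng = np.random.default_rng(7)
coords = rng.uniform(0, 40, size=(60, 2))           # station locations (km)
lst = rng.normal(30, 4, 60)                         # satellite LST at the stations
beta_true = 0.5 + coords[:, 0] / 80                 # spatially varying true slope
tair = 5 + beta_true * lst + rng.normal(0, 0.5, 60) # station air temperature

def local_coefficients(pt, bandwidth=10.0):
    d = np.linalg.norm(coords - pt, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)         # Gaussian kernel weights
    X = np.column_stack([np.ones_like(lst), lst])
    XtW = X.T * w                                   # equivalent to X.T @ diag(w)
    return np.linalg.solve(XtW @ X, XtW @ tair)     # [local intercept, local LST slope]

print(local_coefficients(np.array([5.0, 5.0])))
print(local_coefficients(np.array([35.0, 35.0])))   # the slope differs across space
```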
Procedia PDF Downloads 194
30865 Analysis of the Significance of Multimedia Channels Using Sparse PCA and Regularized SVD
Authors: Kourosh Modarresi
Abstract:
The abundance of media channels and devices has given users a variety of options to extract, discover, and explore information in the digital world. Since there is often a long and complicated path that a typical user may traverse before taking any (significant) action (such as purchasing goods and services), it is critical to know how each node (media channel) in the user's path has contributed to the final action. In this work, the significance of each media channel is computed using statistical analysis and machine learning techniques. More specifically, 'Regularized Singular Value Decomposition' and 'Sparse Principal Component Analysis' have been used to compute the significance of each channel toward the final action. The results of this work are a considerable improvement over present approaches.
Keywords: multimedia attribution, sparse principal component, regularization, singular value decomposition, feature significance, machine learning, linear systems, variable shrinkage
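A hedged sketch of one way channel significance can be scored with sparse principal components (scikit-learn's SparsePCA; the channel names and touchpoint data are invented): the L1 penalty zeroes out weak channels, and the surviving loading magnitudes serve as rough significance scores.

```python
# Sparse PCA on channel-exposure vectors as a channel-significance score.
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(8)
channels = ["email", "display", "search", "social", "video"]
X = rng.poisson(1.0, size=(1000, 5)).astype(float)    # touches per channel per user path

spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)
significance = np.abs(spca.components_).sum(axis=0)   # L1 penalty zeroes weak channels
for ch, s in sorted(zip(channels, significance), key=lambda t: -t[1]):
    print(f"{ch}: {s:.2f}")
```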
Procedia PDF Downloads 309
30864 Assessment of Ground Water Potential Zone: A Case Study of Paramakudi Taluk, Ramanathapuram, Tamilnadu, India
Authors: Shri Devi
Abstract:
This study was conducted to delineate the groundwater potential zones in Paramakudi taluk, Ramanathapuram, Tamilnadu, India, with a total areal extent of 745 sq. km. Various thematic maps have been prepared for the study, such as soil, geology, geomorphology, drainage, and land use of the particular study area, using the toposheet at 1:50,000 scale. The digital elevation model (DEM) has been generated from a contour interval of 10 m, and the slope map was also prepared. The groundwater potential zone map of the region was obtained using weighted overlay analysis, for which all the thematic maps were overlaid in ArcGIS 10.2. For this output, rankings were assigned to the parameters of each thematic layer with different weightages: 25% to soil, 25% to geomorphology, 25% to land use/land cover, 15% to slope, 5% to lineament, and 5% to drainage streams. Using all of these, the potential zone map was prepared and overlaid with the village map to identify the regions with good, moderate, and low groundwater potential.
Keywords: GIS, ground water, Paramakudi, weighted overlay analysis
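The weighted overlay step itself reduces to a weighted sum of reclassified rank rasters, as in the small numpy sketch below; the rasters are random stand-ins for the actual thematic maps, and the class breaks are illustrative.

```python
# Weighted overlay: weighted sum of rank rasters, then zone classification.
import numpy as np

rng = np.random.default_rng(9)
shape = (100, 100)                          # grid cells over the taluk
layers = {                                  # (weight, rank raster: 1 = poor ... 5 = good)
    "soil":          (0.25, rng.integers(1, 6, shape)),
    "geomorphology": (0.25, rng.integers(1, 6, shape)),
    "lulc":          (0.25, rng.integers(1, 6, shape)),
    "slope":         (0.15, rng.integers(1, 6, shape)),
    "lineament":     (0.05, rng.integers(1, 6, shape)),
    "drainage":      (0.05, rng.integers(1, 6, shape)),
}
score = sum(w * r for w, r in layers.values())        # weights sum to 1.0
zones = np.digitize(score, bins=[2.5, 3.5])           # 0 = low, 1 = moderate, 2 = good
print(np.bincount(zones.ravel()))                     # cell count per potential zone
```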
Procedia PDF Downloads 342
30863 Application of Change Detection Techniques in Monitoring Environmental Phenomena: A Review
Authors: T. Garba, Y. Y. Babanyara, T. O. Quddus, A. K. Mukatari
Abstract:
Human activities cause environmental parameters to keep changing globally. While some changes are necessary and beneficial to flora and fauna, others have serious consequences, threatening the survival of their natural habitat if these changes are not properly monitored and mitigated. In-situ assessments are characterized by many challenges due to the absence of time series data, and sometimes the areas to be observed or monitored are inaccessible. Satellite remote sensing provides us with digital images of the same geographic areas at pre-defined intervals. This makes it possible to monitor and detect changes in environmental phenomena. This paper, therefore, reviews the commonly used change detection techniques globally, such as image differencing, image ratioing, image regression, vegetation index differencing, change vector analysis, principal components analysis, multidate classification, post-classification comparison, and visual interpretation. The paper concludes by suggesting the use of more than one technique.
Keywords: environmental phenomena, change detection, monitor, techniques
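As a worked example of the simplest technique reviewed, image differencing, the sketch below subtracts two co-registered synthetic images and thresholds the difference at mean ± 2 standard deviations to flag change pixels.

```python
# Image differencing with a mean +/- 2-sigma change threshold.
import numpy as np

rng = np.random.default_rng(10)
date1 = rng.normal(100, 10, size=(200, 200))    # reflectance image, date 1
date2 = date1 + rng.normal(0, 2, size=(200, 200))
date2[50:80, 50:80] += 40                       # a simulated land-cover change

diff = date2 - date1
mu, sigma = diff.mean(), diff.std()
change_mask = np.abs(diff - mu) > 2 * sigma     # pixels flagged as changed
print(f"changed pixels: {change_mask.sum()} of {diff.size}")
```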
Procedia PDF Downloads 274
30862 Solving Single Machine Total Weighted Tardiness Problem Using Gaussian Process Regression
Authors: Wanatchapong Kongkaew
Abstract:
This paper proposes an application of a probabilistic technique, namely Gaussian process regression, for estimating an optimal sequence for the single machine total weighted tardiness (SMTWT) scheduling problem. In this work, the Gaussian process regression (GPR) model is utilized to predict an optimal sequence for the SMTWT problem, and its solution is improved by using an iterated local search based on a simulated annealing scheme, called the GPRISA algorithm. The results show that the proposed GPRISA method achieves very good performance and a reasonable trade-off between solution quality and time consumption. Moreover, in terms of deviation from the best-known solution, the proposed mechanism noticeably outperforms recent existing approaches.
Keywords: Gaussian process regression, iterated local search, simulated annealing, single machine total weighted tardiness
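A self-contained sketch of the improvement stage on a toy instance: the total weighted tardiness objective and a simulated-annealing local search over pairwise swaps. In the GPRISA algorithm this search would start from the GPR-predicted sequence; here it starts from a random one, and the job data are invented.

```python
# Total weighted tardiness + simulated-annealing swap search on a toy instance.
import math, random

random.seed(0)
p = [4, 2, 6, 3, 5, 7]        # processing times
w = [3, 1, 5, 2, 4, 2]        # tardiness weights
d = [8, 6, 15, 10, 12, 20]    # due dates

def twt(seq):
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += w[j] * max(0, t - d[j])   # weighted tardiness of job j
    return total

seq = list(range(6))
random.shuffle(seq)           # GPRISA would start from the GPR-predicted sequence
best, temp = twt(seq), 10.0
while temp > 0.01:
    i, j = random.sample(range(6), 2)
    cand = seq[:]
    cand[i], cand[j] = cand[j], cand[i]
    delta = twt(cand) - twt(seq)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        seq = cand            # accept improving or, occasionally, worsening swaps
    best = min(best, twt(seq))
    temp *= 0.99              # geometric cooling schedule
print(seq, best)
```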
Procedia PDF Downloads 309
30861 A Weighted Approach to Unconstrained Iris Recognition
Authors: Yao-Hong Tsai
Abstract:
This paper presents a weighted approach to unconstrained iris recognition. Nowadays, commercial systems are usually characterized by strong acquisition constraints based on the subject's cooperation. However, such cooperation is not always achievable in real scenarios in our daily life. Researchers have focused on reducing these constraints while maintaining the performance of the system through new techniques. To cope with large variation in the environment, two main improvements were developed for the proposed iris recognition system. First, to handle extremely uneven lighting conditions, statistics-based illumination normalization is applied to the eye region to increase the accuracy of the iris features. The detection of the iris image is based on the AdaBoost algorithm. Second, the weighted approach is designed with Gaussian functions according to the distance to the center of the iris. Furthermore, a local binary pattern (LBP) histogram is then applied to texture classification with the weights. Experiments showed that the proposed system provides users a more flexible and feasible way to interact with the verification system through iris recognition.
Keywords: authentication, iris recognition, AdaBoost, local binary pattern
Procedia PDF Downloads 224
30860 Statistical Wavelet Features, PCA, and SVM-Based Approach for EEG Signals Classification
Authors: R. K. Chaurasiya, N. D. Londhe, S. Ghosh
Abstract:
The study of the electrical signals produced by the neural activity of the human brain is called electroencephalography. In this paper, we propose an automatic and efficient EEG signal classification approach. The proposed approach is used to classify an EEG signal into two classes: epileptic seizure or not. In the proposed approach, we start by extracting features, applying the Discrete Wavelet Transform (DWT) in order to decompose the EEG signals into sub-bands. These features, extracted from the detail and approximation coefficients of the DWT sub-bands, are used as input to Principal Component Analysis (PCA). The classification is based on reducing the feature dimension using PCA and deriving the support vectors using a Support Vector Machine (SVM). The experiments are performed on a real and standard dataset. A very high level of classification accuracy is obtained in the classification results.
Keywords: discrete wavelet transform, electroencephalogram, pattern recognition, principal component analysis, support vector machine
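A hedged sketch of the DWT-PCA-SVM pipeline on synthetic signals; the 'db4' wavelet, 5 decomposition levels, and coefficient statistics are common choices in the EEG literature, assumed here rather than taken from the paper.

```python
# DWT sub-band statistics -> PCA -> SVM on synthetic "EEG" segments.
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)

def features(sig):
    coeffs = pywt.wavedec(sig, "db4", level=5)       # approximation + 5 detail bands
    return np.array([f(c) for c in coeffs
                     for f in (np.mean, np.std, lambda v: np.mean(np.abs(v)))])

t = np.linspace(0, 4, 1024)
normal  = [features(rng.normal(size=1024)) for _ in range(40)]
seizure = [features(3 * np.sin(2 * np.pi * 5 * t) + rng.normal(size=1024))
           for _ in range(40)]                        # rhythmic discharge stand-in
X = np.vstack(normal + seizure)
y = np.array([0] * 40 + [1] * 40)

clf = make_pipeline(StandardScaler(), PCA(n_components=6), SVC())
print(cross_val_score(clf, X, y, cv=5).mean())
```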
Procedia PDF Downloads 638
30859 X̄ and S Control Charts based on Weighted Standard Deviation Method
Authors: Derya Karagöz
Abstract:
A Shewhart chart based on the normality assumption is not appropriate for skewed distributions, since its Type-I error rate is inflated. This study presents X̄ and S control charts for monitoring the process variability of skewed distributions. We propose Weighted Standard Deviation (WSD) X̄ and S control charts. The standard deviation estimator is applied to monitor the process variability by estimating the process standard deviation, in the case of the WSD X̄ and S control charts, as this estimator is simple and easy to compute. Unlike the Shewhart control chart, the proposed charts provide asymmetric limits in accordance with the direction and degree of skewness when constructing the upper and lower limits. The performances of the proposed charts are compared with those of other heuristic charts for skewed distributions by means of a simulation study. The simulation studies show that the proposed control charts have good properties for skewed distributions and large sample sizes.
Keywords: weighted standard deviation, MAD, skewed distributions, S control charts
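A sketch of asymmetric WSD-style X̄ limits under one common formulation from the literature (attributed to Chang and Bai), which this paper may refine: the 3-sigma band is split by the proportion P of observations at or below the mean, widening the limit on the long-tail side. Both the data and the formulation details are assumptions here, not the paper's exact charts.

```python
# WSD-style asymmetric X-bar limits on right-skewed subgroup data (assumed form:
# upper deviation = 2*sigma*P, lower deviation = 2*sigma*(1-P), P = Pr(X <= mean)).
import numpy as np

rng = np.random.default_rng(12)
data = rng.exponential(scale=2.0, size=(50, 5))    # 50 subgroups of size 5, right-skewed

xbar = data.mean(axis=1)
mu, sigma, n = data.mean(), data.std(ddof=1), data.shape[1]
P = np.mean(data <= mu)                            # skewness weight; > 0.5 if right-skewed

ucl = mu + 3 * (sigma / np.sqrt(n)) * 2 * P        # wider limit on the long-tail side
lcl = mu - 3 * (sigma / np.sqrt(n)) * 2 * (1 - P)
print(f"P={P:.3f}, LCL={lcl:.2f}, CL={mu:.2f}, UCL={ucl:.2f}")
print("out-of-control subgroups:", np.where((xbar > ucl) | (xbar < lcl))[0])
```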
Procedia PDF Downloads 399
30858 Modified Design of Flyer with Reduced Weight for Use in Textile Machinery
Authors: Payal Patel
Abstract:
Textile machinery is one of the fastest-evolving areas of application of mechanical engineering. The modular approach towards processing, right from the stage of cotton to the fabric, allows us to observe the result of each process on its input, with cost and space being the major constraints. The flyer is a component of the roving machine, which is used as part of the spinning process. In the present work, using HyperWorks, the flyer arm has been modified, which saves the material used for manufacturing the flyer. The size optimization of the flyer is carried out with the objective of weight reduction under the constraints of standard operating conditions. The new design of the flyer is proposed and validated using the HyperWorks module; it is equally strong, but lightweight compared to the existing design. Dynamic balancing of the optimized model is carried out to align a principal inertia axis with the geometric axis of rotation. For the balanced geometry of the flyer, air resistance is obtained theoretically and with Gambit and Fluent. Static analysis of the balanced geometry has been done to verify the operating-condition constraint. A comparison of weight, deflection, and factor of safety has been made for different aluminum alloys.
Keywords: flyer, size optimization, textile, weight
Procedia PDF Downloads 215
30857 The Hierarchical Model of Fitness Services Quality Perception in Serbia
Authors: Mirjana Ilic, Dragan Zivotic, Aleksandra Perovic, Predrag Gavrilovic
Abstract:
The perception of service quality depends on many factors, such as the area in which the services are provided, socioeconomic status, educational status, experience, age, and gender of consumers, as well as many others. For this reason, it is not possible to apply an instrument for establishing the perception of service quality that was developed in other areas and on other populations. The aim of the research was to form an instrument for assessing quality perception in the field of fitness in Serbia. After analyzing the available literature and conducting a pilot study, 15 areas were isolated in which it was possible to observe the perception of service quality. The areas included: material and technical basis, secondary facilities, coaches, programs, reliability, credibility, security, rapid response, compassion, communication, prices, satisfaction, loyalty, quality outcomes, and motives. These areas were covered by a questionnaire consisting of 100 items, where the number of items varied from area to area from 3 up to 11. The questionnaire was administered to 350 subjects of both genders (174 men and 176 women) aged from 18 to 68 years, who had been beneficiaries of fitness services for at least 1 year. In each of the areas, a factor analysis was conducted in its exploratory form by the principal components method. The number of significant factors was determined in accordance with the Kaiser-Guttman criterion. The initial factor solutions were simplified using Varimax rotation. The analyses per area produced from 1 to 4 factors. Afterward, a factor analysis of the factor scores on the first principal component of each respondent in each of the analyzed areas was performed, and a factor structure was obtained with four latent dimensions, interpreted as offer, the relationship with the coaches, the experience of quality, and the initial impression. This factor structure was analysed by hierarchical analysis of oblique factors, which in the second-order space produced a single factor interpreted as a general factor of service quality perception. The resulting questionnaire represents an instrument that can serve managers in the field of fitness to optimize the development of centers, raising the quality of services in line with consumers' needs and expectations.
Keywords: fitness, hierarchical model, quality perception, factor analysis
Procedia PDF Downloads 311
30856 Hybrid Fuzzy Weighted K-Nearest Neighbor to Predict Hospital Readmission for Diabetic Patients
Authors: Soha A. Bahanshal, Byung G. Kim
Abstract:
Identification of patients at high risk for hospital readmission is of crucial importance for quality health care and cost reduction. Predicting hospital readmissions among diabetic patients has been of great interest to many researchers and health decision makers. We build a prediction model to predict hospital readmission for diabetic patients within 30 days of discharge. The core of the prediction model is a modified k-Nearest Neighbor algorithm called the Hybrid Fuzzy Weighted k-Nearest Neighbor algorithm. The prediction is performed on a patient dataset consisting of more than 70,000 patients with 50 attributes. We applied data preprocessing using different techniques in order to handle data imbalance and to fuzzify the data to suit the prediction algorithm. The model so far achieved a classification accuracy of 80%, compared to other models that only use the k-Nearest Neighbor algorithm.
Keywords: machine learning, prediction, classification, hybrid fuzzy weighted k-nearest neighbor, diabetic hospital readmission
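The paper's exact hybrid is not reproduced here; the sketch below shows the classical fuzzy weighted k-NN (in the spirit of Keller et al.) that such hybrids build on, with invented feature data: neighbors vote with inverse-distance fuzzy weights, and the class with the largest aggregated membership wins.

```python
# Fuzzy weighted k-NN: inverse-distance fuzzy votes, largest membership wins.
import numpy as np

def fuzzy_weighted_knn(X_train, y_train, x, k=5, m=2):
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2 / (m - 1))      # fuzzy distance weights
    scores = {}
    for idx, wi in zip(nn, w):
        scores[y_train[idx]] = scores.get(y_train[idx], 0.0) + wi
    total = sum(scores.values())
    memberships = {c: s / total for c, s in scores.items()}   # fuzzy class memberships
    return max(memberships, key=memberships.get), memberships

rng = np.random.default_rng(13)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)    # 1 = readmitted within 30 days (stand-in label)
print(fuzzy_weighted_knn(X, y, np.array([1.8, 2.1, 1.9])))
```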
Procedia PDF Downloads 186
30855 A Ratio-Weighted Decision Tree Algorithm for Imbalance Dataset Classification
Authors: Doyin Afolabi, Phillip Adewole, Oladipupo Sennaike
Abstract:
Most well-known classifiers, including the decision tree algorithm, can make predictions on balanced datasets efficiently. However, the decision tree algorithm tends to be biased on imbalanced datasets because of the skewness of the distribution of such datasets. To overcome this problem, this study proposes a weighted decision tree algorithm that aims to remove the bias toward the majority class and prevent the reduction of majority observations in imbalanced dataset classification. The proposed weighted decision tree algorithm was tested on three imbalanced datasets: a cancer dataset, the German credit dataset, and a banknote dataset. The specificity, sensitivity, and accuracy metrics were used to evaluate the performance of the proposed decision tree algorithm on the datasets. The evaluation results show that, for some of the weights of the proposed decision tree, the specificity, sensitivity, and accuracy metrics gave better results than those of the ID3 decision tree and the decision tree induced with minority entropy, for all three datasets.
Keywords: data mining, decision tree, classification, imbalance dataset
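The proposed ratio weighting is specific to the paper; as a hedged stand-in, this sketch shows the closest off-the-shelf analogue, a decision tree whose split criterion is rebalanced by inverse-frequency class weights, evaluated with the same sensitivity and specificity metrics on a synthetic imbalanced dataset.

```python
# Class-weighted decision tree vs. an unweighted one on imbalanced data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for cw in [None, "balanced"]:     # "balanced" = inverse class-frequency weights
    tree = DecisionTreeClassifier(class_weight=cw, random_state=0).fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    print(f"class_weight={cw}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```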
Procedia PDF Downloads 137