Search results for: weighted based clustering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28505


27815 Analysis of Rescuers' Viewpoint about Victim Tracking in Earthquakes Using Radio Frequency Identification (RFID)

Authors: Sima Ajami, Batool Akbari

Abstract:

Background: The radio frequency identification (RFID) system has been successfully applied to the areas of manufacturing, supply chain, agriculture, transportation, healthcare, and services. RFID is already used to track and trace victims in disaster situations. With RFID, data can be collected in real time and made immediately available to emergency personnel, saving time. Objectives: The aim of this study was, first, to identify stakeholders and customers for rescuing earthquake victims; second, to list key internal and external factors for using RFID to track earthquake victims; and finally, to assess the SWOT from the rescuers' viewpoint. Materials and Methods: This was an applied, analytical study. The study population included scholars, experts, planners, policy makers and rescuers in the "red crescent society of Isfahan province", "disaster management Isfahan province", "maintenance and operation department of Isfahan", "fire and safety services organization of Isfahan municipality", and "medical emergencies and disaster management center of Isfahan". The researchers held a workshop to teach participants about RFID and its uses in tracking earthquake victims. During the workshop, participants identified, listed, and weighed key internal factors (strengths and weaknesses; SW) and external factors (opportunities and threats; OT) for using RFID in tracking earthquake victims. Participants thus weighed the strengths, weaknesses, opportunities, and threats (SWOT), and the weighted scores were calculated. Then, participants' opinions about the issue were assessed. Finally, according to the SWOT matrix, participants proposed strategies to address the weaknesses, problems, challenges, and threats through opportunities and strengths. Results: The SWOT analysis showed that the total weighted scores for internal and external factors were 3.91 (Internal Factor Evaluation) and 3.31 (External Factor Evaluation), respectively. The result therefore fell in the SO-strategies quadrant of the SWOT analysis matrix, yielding aggressive strategies. Organizations, scholars, experts, planners, policy makers and rescue workers should plan to use RFID technology in order to save more victims and manage their lives. Conclusions: The researchers propose applying SO strategies, using a firm's internal strengths to take advantage of external opportunities. It is suggested that policy makers plan to use the most developed technologies to save earthquake victims and deliver the easiest possible service to them. To do this, educating, informing, and encouraging rescuers to use these technologies is essential. Originality/Value: This research paper showed how RFID can be useful for tracking victims in earthquakes.
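The IFE and EFE scores quoted above come from a standard weighted-scoring step: each factor is given a weight (the weights sum to 1) and a rating on the usual 1-4 scale, and the matrix score is the weighted sum; scores above the 2.5 midpoint on both axes place an organization in the aggressive (SO) quadrant. A minimal sketch of that arithmetic, with purely hypothetical factors and ratings, follows:

```python
import numpy as np

# Hypothetical internal factors (strengths/weaknesses): weights sum to 1,
# ratings use the 1-4 scale of IFE/EFE matrices. None of these numbers
# are from the study.
weights = np.array([0.30, 0.25, 0.25, 0.20])   # relative importance per factor
ratings = np.array([4, 4, 4, 3])               # participants' ratings

ife_score = float(weights @ ratings)           # total weighted score
print(f"IFE weighted score: {ife_score:.2f}")  # 3.80 here; the study found 3.91

# An EFE score is computed the same way from the external (OT) factors;
# both scores above 2.5 indicate the aggressive (SO) strategy quadrant.
```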

Keywords: radio frequency identification system, strength, weakness, earthquake, victim

Procedia PDF Downloads 319
27814 Real-Time Classification of Marbles with Decision-Tree Method

Authors: K. S. Parlak, E. Turan

Abstract:

The separation of marbles according to pattern quality is a process performed according to expert judgment, and the classification phase is the most critical part in terms of economic value. In this study, a self-learning system is proposed which classifies marbles quickly and with high success. The system extracts ten features from marble images captured by the camera. The marbles are then classified by the decision-tree method using the extracted features. The user forms the training set by training the system at the marble classification stage, and the system refines itself with every marble image that is classified. The aim of the proposed system is to minimize the error introduced by the person performing the classification and to do so quickly.
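As a hedged illustration of the classification stage, the sketch below trains a scikit-learn decision tree on ten image features per marble and then folds a newly classified marble back into the training set, mimicking the self-learning loop; the features and labels are stand-ins, since the paper does not list them.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Stand-in data: each row holds ten features extracted from a marble image
# (e.g., colour statistics, texture measures); labels are expert-assigned
# pattern-quality classes. Real features would come from the camera images.
rng = np.random.default_rng(0)
X_train = rng.random((200, 10))
y_train = rng.integers(0, 3, 200)          # three hypothetical quality classes

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)

# Classify a new marble, then append the confirmed example and refit,
# so the tree evolves with every image it classifies.
x_new = rng.random((1, 10))
label = clf.predict(x_new)[0]
X_train = np.vstack([X_train, x_new])
y_train = np.append(y_train, label)
clf.fit(X_train, y_train)
```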

Keywords: decision tree, feature extraction, k-means clustering, marble classification

Procedia PDF Downloads 379
27813 Characterization of Aerosol Droplet in Absorption Columns to Avoid Amine Emissions

Authors: Hammad Majeed, Hanna Knuutila, Magne Hilestad, Hallvard Svendsen

Abstract:

Formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption-based capture process. Small mist droplets and fog formed can normally not be removed in conventional demisting equipment because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions on the order of grams per Nm³ have been identified from PCCC plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than would be encountered in a mist-free gas phase in PCCC development. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model was created in Matlab. The model predicts the droplet size, the droplet internal variable profiles, and the mass transfer fluxes as functions of position in the absorber. The Matlab model is based on a subclass of the method of weighted residuals for boundary value problems, the orthogonal collocation method. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion-reaction equations describing the droplet internal profiles for all relevant constituents, and it includes heat transfer across the interface and inside the droplet. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time. Some preliminary simulation results for an aerosol droplet's composition and temperature profiles are given below. Results: As an example, a droplet of initial size 3 microns, initially containing a 5M MEA solution, is exposed to an atmosphere free of MEA; the composition of the gas phase and the temperature change with respect to time throughout the absorber.
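The orthogonal collocation method named here enforces the governing equations at a set of collocation points, turning a boundary value problem into an algebraic system. The sketch below applies the idea to a toy one-dimensional BVP on Chebyshev points; it is only an illustration of the numerical machinery, not the authors' droplet model.

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points and differentiation matrix (after Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

# Toy BVP u''(x) = exp(x), u(-1) = u(1) = 0, standing in for the droplet's
# internal diffusion-reaction equations.
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:N, 1:N]               # second-derivative operator, interior rows
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(D2, np.exp(x[1:N]))
print(u.round(4))                    # collocation solution at the grid points
```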

Keywords: amine solvents, emissions, global climate change, simulation and modelling, aerosol generation

Procedia PDF Downloads 260
27812 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete records are used. Usually, however, the proportion of complete records is rather small, which leads to most of the information being neglected, and the complete records may themselves be strongly distorted. In addition, the reason that data is missing might itself contain information, which is ignored by that approach. An interesting question is therefore whether, for economic analyses such as the one at hand, there is added value in using the whole data set with imputed missing values compared to using the usually small share of complete records (the baseline), and how different algorithms affect the result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, and neural network techniques, are applied. By training the model iteratively on the imputed data and thereby including the information of all records in the model, the distortion of the first training set (the complete records) vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After the optimal parameter set for each algorithm has been found, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other, whereas the demand estimate derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also much more accurate than the baseline estimate. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
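The masking-based evaluation described above can be sketched compactly: knock out known entries at random, impute them with an unsupervised method, and score the imputations against the held-out truth. The example below uses scikit-learn's KNNImputer as one such method; the data and missingness rate are hypothetical.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)

# Hypothetical complete search-subscription features: rooms, max price, region.
X_true = rng.normal(size=(500, 3))

# Randomly create missing values, as in the paper's evaluation protocol.
mask = rng.random(X_true.shape) < 0.2
X_missing = np.where(mask, np.nan, X_true)

# Impute with an unsupervised method and score against the held-out truth;
# repeating this over methods and parameters selects the best imputer.
X_imputed = KNNImputer(n_neighbors=5).fit_transform(X_missing)
rmse = np.sqrt(np.mean((X_imputed[mask] - X_true[mask]) ** 2))
print(f"imputation RMSE: {rmse:.3f}")
```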

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 282
27811 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad Daba, Jean-Pierre Dubois

Abstract:

Multipath fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of the fading noise are not analytically tractable and pose a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, the noise is multiplicative and is referred to as stochastically local fading. In many analytical investigations of multiplicative noise, exponential or Gamma statistics are invoked. More recent advances by the author of this paper have utilized Poisson-modulated and weighted generalized Laguerre polynomials with controlling parameters and uncorrelated-noise assumptions. In this paper, we investigate the statistics of a multi-diversity, stochastically local area fading channel in which the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent specular Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.
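A minimal Monte Carlo sketch of the model's building blocks may help fix ideas: a lognormal random intensity drives a Poisson number of scatterers (the doubly stochastic part), each diffuse path is complex Gaussian (Rayleigh amplitude), and a coherent specular term is added. All parameter values are illustrative assumptions, and the sketch does not reproduce the paper's closed-form statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

def envelope_sample(los=1.0, mu=1.0, sigma=0.5):
    # Doubly stochastic Poisson: the scatterer count is Poisson with a
    # lognormal random intensity.
    n = rng.poisson(rng.lognormal(mean=mu, sigma=sigma))
    # Diffuse scatterers: i.i.d. complex Gaussian paths (Rayleigh amplitudes);
    # adding a per-path deterministic offset would make them Rician instead.
    diffuse = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    # Coherent specular line-of-sight component (constant here for simplicity).
    return np.abs(los + diffuse.sum())

samples = np.array([envelope_sample() for _ in range(10000)])
print("mean envelope:", samples.mean().round(3), "std:", samples.std().round(3))
```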

Keywords: cellular communication, femto and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process

Procedia PDF Downloads 444
27810 Emotional Intelligence as Predictor of Academic Success among Third Year College Students of PIT

Authors: Sonia Arradaza-Pajaron

Abstract:

College students are expected to engage in on-the-job training or an internship to complete a course requirement prior to graduation. In this scenario, they are exposed to the real world of work outside their training institution. This study was conducted to find out their readiness, both emotionally and academically. A descriptive-correlational research design was employed, and a random sampling technique was used to select 265 third-year college students of PIT, SY 2014-15. A questionnaire on emotional intelligence (comprising four components, namely emotional literacy, emotional quotient competence, values and beliefs, and emotional quotient outcomes) was fielded to the respondents, and the general weighted average (GWA) was extracted from the school's automated records. The data collected were statistically treated using percentages, weighted means, and Pearson's r for correlation. Results revealed that the respondents' emotional intelligence level is moderately high, while their academic performance is good. A highly significant relationship was found between the EI component of emotional literacy and academic performance, while a significant relationship was found between emotional quotient outcomes and academic performance. Since EI correlates significantly with academic performance, OJT performance may also be affected, either positively or negatively. Thus, EI can be considered a predictor of academic and academic-related performance. Based on the results, it is recommended that the institution consider embedding emotional intelligence (especially the emotional literacy and emotional quotient outcomes of the students) in the college curriculum. This can be done if the school has an effective emotional intelligence framework or program, implemented in the different colleges by qualified and competent teachers and guidance counselors.

Keywords: academic performance, emotional intelligence, college students, academic success

Procedia PDF Downloads 373
27809 Defect Classification of Hydrogen Fuel Pressure Vessels Using Deep Learning

Authors: Dongju Kim, Youngjoo Suh, Hyojin Kim, Gyeongyeong Kim

Abstract:

Acoustic emission testing (AET) is widely used to test the structural integrity of operational hydrogen storage containers, and clustering algorithms are frequently used in pattern recognition methods to interpret AET results. However, the interpretation of AET results can vary from user to user, as the tuning of the relevant parameters relies on the user's experience and knowledge of AET. It is therefore desirable to use a deep learning model instead to identify patterns in acoustic emission (AE) signal data that can be used to classify defects. In this paper, a deep learning-based model for classifying the types of defects in hydrogen storage tanks from AE sensor waveforms is proposed. As hydrogen storage tanks are commonly constructed from carbon fiber reinforced polymer composite (CFRP), a defect classification dataset was collected through a tensile test on a CFRP specimen with an AE sensor attached. The classification model, using a one-dimensional convolutional neural network (1-D CNN) and synthetic minority oversampling technique (SMOTE) data augmentation, achieved 91.09% accuracy for each defect class. It is expected that the deep learning classification model in this paper, used with AET, will help in evaluating the operational safety of hydrogen storage containers.
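A hedged sketch of the training pipeline described here (SMOTE rebalancing followed by a small 1-D CNN) is given below in PyTorch; the waveform length, class imbalance, and network sizes are assumptions, not the paper's actual configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from imblearn.over_sampling import SMOTE

# Hypothetical AE dataset: 1000 windows of 256 samples, 4 defect types,
# heavily imbalanced; SMOTE synthesizes minority-class examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256)).astype("float32")
y = rng.choice(4, size=1000, p=[0.7, 0.1, 0.1, 0.1])
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

class AE1DCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * 16, n_classes)  # 256 -> 64 -> 16 samples

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = AE1DCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
xb = torch.from_numpy(np.asarray(X_res, dtype="float32")).unsqueeze(1)
yb = torch.from_numpy(np.asarray(y_res)).long()
for _ in range(5):                                       # illustrative epochs
    opt.zero_grad()
    loss_fn(model(xb), yb).backward()
    opt.step()
```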

Keywords: acoustic emission testing, carbon fiber reinforced polymer composite, one-dimensional convolutional neural network, SMOTE data augmentation

Procedia PDF Downloads 88
27808 District Selection for Geotechnical Settlement Suitability Using GIS and Multi-Criteria Decision Analysis: A Case Study in Denizli, Turkey

Authors: Erdal Akyol, Mutlu Alkan

Abstract:

Multi-criteria decision analysis (MCDA) covers both data and experience and is commonly used to solve problems with many parameters and uncertainties, while GIS-supported solutions improve and speed up the decision process. Here, weighted grading, an MCDA method, is employed to solve a geotechnical problem. In this study, geotechnical parameters, namely soil type, SPT blow number (N), shear wave velocity (Vs), and depth of the underground water level (DUWL), were combined in a GIS-based MCDA. The settlement suitability of the municipal area was analyzed by the method in terms of geotechnical aspects. The MCDA results were compatible with geotechnical observations and experience. The method can be employed in geotechnically oriented microzoning studies if the criteria are well evaluated.
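The weighted-grading step itself reduces to a weighted overlay of normalized criterion grids. A minimal sketch with hypothetical weights and random stand-in rasters (real grids would come from the GIS layers named above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in normalized criterion grids (0 = least suitable, 1 = most suitable).
soil, spt, vs, duwl = (rng.random((100, 100)) for _ in range(4))

# Expert-assigned weights summing to 1; the values here are illustrative only.
w_soil, w_spt, w_vs, w_duwl = 0.35, 0.25, 0.25, 0.15

suitability = w_soil * soil + w_spt * spt + w_vs * vs + w_duwl * duwl

# Grade the continuous score into four suitability classes for mapping.
classes = np.digitize(suitability, bins=[0.25, 0.5, 0.75])
print(np.bincount(classes.ravel()) / classes.size)   # share of area per class
```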

Keywords: GIS, spatial analysis, multi criteria decision analysis, geotechnics

Procedia PDF Downloads 456
27807 Visualization of PM₂.₅ Time Series and Correlation Analysis of Cities in Bangladesh

Authors: Asif Zaman, Moinul Islam Zaber, Amin Ahsan Ali

Abstract:

With recent industrialization, South Asian countries are being affected by air pollution due to a severe increase in fine particulate matter (PM₂.₅), and Bangladesh is among the most polluted countries. In this paper, statistical analyses were conducted on PM₂.₅ time series from various districts in Bangladesh, mostly around Dhaka city, examining the dynamic interactions and relationships between PM₂.₅ concentrations in different zones. The study aims at understanding the characteristics of PM₂.₅, such as its spatial-temporal behaviour and its correlation with other contributors to air pollution, such as human activities, driving factors, and environmental effects. Clustering the data gave insight into groups of districts, based on their AQI frequency, that serve as representative districts. Seasonality analysis at hourly and monthly frequencies found higher concentrations of fine particles at nighttime and in the winter season, respectively. Cross-correlation analysis discovered correlations among cities based on time-lagged series of air particle readings, and a visualization framework was developed for observing interactions in PM₂.₅ concentrations between cities. Significant time-lagged correlations were discovered between the PM₂.₅ time series of different city groups throughout the country. Additionally, seasonal heatmaps show that the pooled series correlations are less significant in warmer months and among cities separated by greater geographic distances, and that the lag magnitude and direction of the best-shifted correlated particulate matter time series among districts change seasonally. The geographic map visualization demonstrates the spatial behaviour of air pollution among the districts around Dhaka city and the significant effect of wind direction on the correlated shifted time series. The visualization framework has multiple uses, from gathering insight into the general and seasonal air quality of Bangladesh to determining the pathways of regional transport of air pollution.
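The time-lagged cross-correlation at the heart of this analysis can be sketched in a few lines: shift one city's series against another's and keep the lag with the highest correlation. The series below are synthetic stand-ins in which one district lags another by six hours.

```python
import numpy as np

def best_lag(a, b, max_lag=48):
    """Return the hourly shift of b (and its correlation) that best matches a."""
    best = (0, -np.inf)
    for lag in range(-max_lag, max_lag + 1):
        x, y = (a[lag:], b[:len(b) - lag]) if lag >= 0 else (a[:lag], b[-lag:])
        r = np.corrcoef(x, y)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

rng = np.random.default_rng(2)
dhaka = rng.normal(size=2000).cumsum()                 # stand-in PM2.5 series
nearby = np.roll(dhaka, 6) + rng.normal(scale=0.5, size=2000)
print(best_lag(dhaka, nearby))   # lag of -6: the first series leads by ~6 hours
```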

Keywords: air quality, particles, cross correlation, seasonality

Procedia PDF Downloads 103
27806 Molecular Clustering and Velocity Increase in Converging-Diverging Nozzle in Molecular Dynamics Simulation

Authors: Jeoungsu Na, Jaehawn Lee, Changil Hong, Suhee Kim

Abstract:

A molecular dynamics simulation in a converging-diverging nozzle was performed to study molecular collisions and their influence on the average flow velocity at a variety of vacuum levels. The static pressures and the dynamic pressure exerted by molecular collisions on selected walls were compared to determine the intensity variations of the directional flows. With the pressure difference between the entrance and the exit of the nozzle held constant, the numerical experiment examined molecular velocities and directional flows. The results show that the velocity increased at the nozzle exit as the vacuum level in that area got higher, because of fewer molecular collisions.

Keywords: cavitation, molecular collision, nozzle, vacuum, velocity increase

Procedia PDF Downloads 428
27805 Deep Vision: A Robust Dominant Colour Extraction Framework for T-Shirts Based on Semantic Segmentation

Authors: Kishore Kumar R., Kaustav Sengupta, Shalini Sood Sehgal, Poornima Santhanam

Abstract:

Fashion is a human expression that is constantly changing, and one of the prime factors that consistently influences fashion is the change in colour preferences. The role of colour in our everyday lives is very significant; it subconsciously says a lot about one's mindset and mood. Analyzing colours extracted from outfit images is therefore a critical study for examining individual/consumer behaviour. Several research works have been carried out on extracting colours from images but, to the best of our knowledge, there were no studies that extract colours for a specific type of apparel and identify colour patterns geographically. This paper proposes a framework for accurately extracting colours from T-shirt images and predicting dominant colours geographically. The proposed method consists of two stages: first, a U-Net deep learning model is adopted to segment the T-shirts from the images; second, the colours are extracted only from the T-shirt segments. The proposed method employs the iMaterialist (Fashion) 2019 dataset for the semantic segmentation task. The proposed framework also includes a mechanism for gathering data and analyzing India's general colour preferences. From this research, it was observed that black and grey are the dominant colours in different regions of India. The proposed method can be adapted to study fashion's evolving colour preferences.
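Given the U-Net mask, the dominant-colour step is typically a small k-means problem over the masked pixels. A hedged sketch (the mask and image here are synthetic placeholders for the segmentation output):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholders: an RGB image and the binary T-shirt mask from the U-Net stage.
image = rng.integers(0, 256, size=(256, 256, 3))
mask = np.zeros((256, 256), dtype=bool)
mask[64:192, 64:192] = True

pixels = image[mask].astype(float)          # only T-shirt pixels, shape (N, 3)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)

# The dominant colour is the centroid of the most populated cluster.
counts = np.bincount(km.labels_)
print("dominant RGB:", km.cluster_centers_[counts.argmax()].astype(int))
```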

Keywords: colour analysis in t-shirts, convolutional neural network, encoder-decoder, k-means clustering, semantic segmentation, U-Net model

Procedia PDF Downloads 109
27804 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya

Authors: Abdel Rahman Khider Hassan

Abstract:

This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of location-intrinsic parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset included lithology, slope geometry (slope inclination, aspect, elevation, and curvature) and land use/land cover. The triggering factors included rainfall as the climatic factor, in addition to the destructive effects reflected by the proximity of roads and the drainage network to areas that are susceptible to landslides. No published study on landslides has been obtained for this area. Thus, digital datasets of the above spatial parameters were conveniently acquired, stored, manipulated and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in an ArcGIS 10.2.2 environment). The landslide hazard zonation is deduced by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid together to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the total 3200 km² surface of the lake catchment, most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazards, whilst about 13% (416 km²) falls under high hazards. Only 1.0% (32 km²) of the catchment displays very high landslide hazards, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazards. This result confirms the importance of steep slope angles, lithology, vegetation land cover and slope orientation (aspect) as the major determining factors of slope failures. The information provided by the produced LHZ map could lay the basis for decision making as well as for mitigation and for applications in avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.

Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha

Procedia PDF Downloads 197
27803 Coastal Vulnerability under Significant Sea Level Rise: Risk and Adaptation Measures for Mumbai

Authors: Malay Kumar Pramanik

Abstract:

Climate change-induced sea level rise increases storm surge, erosion, and inundation, which are stirred by an intricate interplay of physical environmental components at the coastal region. The Mumbai coast is highly vulnerable to accelerated regional sea level change due to its highly dense population, highly developed economy, and low topography. To determine the significant drivers of coastal vulnerability, this study analyzes four different iterations of the CVI by incorporating pixel-based, differentially weighted rank values of five selected geological variables (CVI5), three physical variables (CVI8, including the geological variables), and four socio-economic variables (CVI4). CVI5 and CVI8 yielded broadly similar results, but after the socio-economic variables were included, the combined CVI (CVI12) changed along the Mumbai and Kurla coastal portions, indicating that the study areas are most sensitive to socio-economic variables. The CVI12 results show that, of the 274.1 km of coastline analyzed, 55.83% of the coast is of very low vulnerability, 60.91% is moderately vulnerable, while 50.75% is very highly vulnerable. The findings also confirm that, in the context of a growing urban population and an increasing rate of economic activity, socio-economic variables are the most important variables for validating and testing the CVI. Finally, some recommendations are presented for the concerned decision makers and stakeholders to develop appropriate coastal management plans, nourishment projects and mitigation measures that take socio-economic variables into account.
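A common way to combine ranked variables into a CVI (the square root of their product divided by the number of variables, with weights applied to the ranks) can be sketched as follows; the ranks are random stand-ins, and the exact weighting used in the study may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in rank grids (1 = very low to 5 = very high vulnerability) for five
# geological variables over 1000 shoreline pixels.
ranks = rng.integers(1, 6, size=(5, 1000))

# One widely used CVI formulation: sqrt(product of ranks / number of ranks).
cvi5 = np.sqrt(ranks.prod(axis=0) / ranks.shape[0])

# Percentile breaks split the index into vulnerability classes for mapping.
zones = np.digitize(cvi5, np.percentile(cvi5, [25, 50, 75]))
print(np.bincount(zones) / zones.size)      # share of coast in each class
```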

Keywords: coastal vulnerability index, sea level change, Mumbai coast, geospatial approach, coastal management, climate change

Procedia PDF Downloads 129
27802 Estimation of Genetic Diversity in Sorghum Accessions Using Agro-Morphological and Nutritional Traits

Authors: Maletsema Alina Mofokeng, Nemera Shargie

Abstract:

Sorghum is one of the most important cereal crops, grown as a source of calories for many people in the tropics and sub-tropics of the world. Proper characterisation and evaluation of crop germplasm is an important component of effective management of genetic resources and of their utilisation in the improvement of the crop through plant breeding. The objective of the study was to estimate the genetic diversity present in sorghum accessions grown in South Africa using agro-morphological traits and some nutritional contents. The experiment was carried out in Potchefstroom. Data were subjected to correlation analysis, principal component analysis, and hierarchical clustering using the GenStat statistical software. There were highly significant differences among the accessions in both the agro-morphological and the nutritional quality traits. Grain yield was highly positively correlated with panicle weight. Plant height was highly significantly correlated with internode length, leaf length, leaf number, stem diameter, the number of nodes, and starch content. The principal component analysis revealed three important PCs accounting for a total variation of 78.6%. The protein content ranged from 7.7 to 14.7%, and starch ranged from 58.52 to 80.44%. The accessions with high protein and starch contents were AS16cyc and MP4277. Vast genetic diversity was observed among the accessions assessed, which can be used by plant breeders to improve yield and nutritional traits.

Keywords: accessions, genetic diversity, nutritional quality, sorghum

Procedia PDF Downloads 259
27801 Human Behavior Modeling in Video Surveillance of Conference Halls

Authors: Nour Charara, Hussein Charara, Omar Abou Khaled, Hani Abdallah, Elena Mugellini

Abstract:

In this paper, we present a human behavior modeling approach for video scenes. This approach is used to model the normal behaviors in conference halls. We exploited the probabilistic latent semantic analysis (PLSA) technique, using the 'bag-of-terms' paradigm, as a tool for exploring video data to learn the model by grouping similar activities. Our term vocabulary consists of 3D spatio-temporal patch groups assigned by the direction of motion. Our video representation captures the spatial information, the object trajectory, and the motion. The main value of this approach is that it can be adapted to detect abnormal behaviors in order to ensure and enhance human security.
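PLSA itself fits the mixture P(d, w) = Σ_z P(z|d)P(w|z) by EM over a document-term count matrix; here the 'documents' are video clips and the 'terms' are motion-direction patch groups. A compact, hedged implementation sketch on random counts:

```python
import numpy as np

def plsa(n_dw, K=3, iters=50, seed=0):
    """Minimal PLSA via EM on a clip-by-term count matrix n_dw."""
    rng = np.random.default_rng(seed)
    D, W = n_dw.shape
    p_z_d = rng.random((D, K)); p_z_d /= p_z_d.sum(1, keepdims=True)  # P(z|d)
    p_w_z = rng.random((K, W)); p_w_z /= p_w_z.sum(1, keepdims=True)  # P(w|z)
    for _ in range(iters):
        # E-step: responsibilities P(z|d,w) proportional to P(z|d) P(w|z).
        q = p_z_d[:, :, None] * p_w_z[None, :, :]        # shape (D, K, W)
        q /= q.sum(1, keepdims=True) + 1e-12
        # M-step: re-estimate both factors from the expected counts.
        nq = n_dw[:, None, :] * q
        p_w_z = nq.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True)
        p_z_d = nq.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True)
    return p_z_d, p_w_z

# Random stand-in counts: 20 clips x 30 spatio-temporal motion terms.
counts = np.random.default_rng(4).integers(0, 5, size=(20, 30)).astype(float)
p_z_d, _ = plsa(counts)
print("activity mixture of clip 0:", p_z_d[0].round(2))  # groups similar clips
```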

Keywords: activity modeling, clustering, PLSA, video representation

Procedia PDF Downloads 391
27800 Energy Efficient Firefly Algorithm in Wireless Sensor Network

Authors: Wafa’ Alsharafat, Khalid Batiha, Alaa Kassab

Abstract:

A wireless sensor network (WSN) is comprised of a huge number of small and cheap devices known as sensor nodes. Usually, these sensor nodes are massively and randomly deployed, in an ad-hoc manner, over hostile and harsh environments to sense, collect, and transmit data to the needed locations (i.e., the base station). One of the main advantages of a WSN is its ability to work unattended in scattered environments regardless of the presence of humans, such as around remote active volcanoes or in earthquake zones. In an expanding WSN, network lifetime is a major concern, and clustering techniques are important for maximizing it. Nature-inspired algorithms are developed and tuned to find optimized solutions for various optimization problems. We propose an energy-efficient firefly algorithm to extend the network lifetime as long as possible.
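For reference, the core firefly update moves each firefly toward every brighter one with an attractiveness that decays with distance, plus a random step. The sketch below is a generic, hedged implementation minimizing a toy energy objective, not the paper's WSN-specific formulation.

```python
import numpy as np

def firefly_minimize(obj, dim, n=20, iters=100, beta0=1.0, gamma=1.0,
                     alpha=0.2, seed=0):
    """Bare-bones firefly algorithm minimizing obj over the unit hypercube."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, dim))
    light = np.array([obj(xi) for xi in x])       # lower objective = brighter
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:           # move i toward brighter j
                    beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], 0.0, 1.0)
                    light[i] = obj(x[i])
    best = light.argmin()
    return x[best], light[best]

# Toy stand-in for an energy objective (e.g., trading off cluster-head share
# against transmission cost); a real WSN objective would replace it.
energy = lambda v: (v[0] - 0.05) ** 2 + 0.1 * v[1]
print(firefly_minimize(energy, dim=2))
```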

Keywords: wireless sensor network, WSN, firefly algorithm, energy efficiency

Procedia PDF Downloads 387
27799 The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators

Authors: Fathi Abid, Bilel Kaffel

Abstract:

The olive tree, the olive harvest during the winter season, and the production of olive oil, better known to professionals as the crushing operation, have long interested institutional traders such as olive-oil offices, private companies in the food industry that refine and extract pomace olive oil, and public and private export-import companies specializing in olive oil. Contrary to what might be expected, the major problem facing olive oil producers in each winter campaign is not whether the harvest will be good, but whether the sale price will allow them to cover production costs and achieve a reasonable profit margin. These questions are entirely legitimate given the importance of the issue and the heavy complexity of the uncertainty and competition, made tougher by high levels of indebtedness and by the experience and expertise of speculators and producers whose objectives are sometimes conflicting. The aim of this paper is to study the formation mechanism of olive oil prices in order to learn about speculators' behavior and expectations in the market, how they contribute through their industry knowledge and financial alliances, and the size of the financial challenge that may be involved for them in building private information channels globally to take advantage. The methodology used in this paper is based on two stages. In the first stage, we study econometrically the formation mechanisms of the olive oil price in order to understand market participants' behavior, by implementing ARMA, SARMA, and GARCH models and stochastic diffusion processes. The second stage is devoted to prediction, using a combined wavelet-ANN approach. Our main findings indicate that olive oil market participants interact with each other in a way that promotes the formation of stylized facts. Unstable participant behavior creates volatility clustering, nonlinear dependence, and cyclicity. By imitating each other in some periods of the campaign, different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is based on a back-propagation artificial neural network with input information based on wavelet decomposition and recent past history.
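The combined wavelet-ANN predictor can be sketched as: decompose a trailing window of prices with a discrete wavelet transform, and feed the coefficients to a neural network that predicts the next price. The example below uses PyWavelets and a scikit-learn MLP on a synthetic price series; all choices (wavelet, window, architecture) are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Synthetic stand-in for a daily olive oil price series with a mild cycle.
price = 3.0 + 0.5 * np.sin(np.arange(600) / 30.0) + rng.normal(0, 0.05, 600)

def wavelet_features(window):
    return np.concatenate(pywt.wavedec(window, "db4", level=3))

win = 64
X = np.array([wavelet_features(price[t - win:t])
              for t in range(win, len(price))])
y = price[win:]                                     # one-step-ahead target

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])                         # hold out the last 50 days
print("held-out R^2:", round(model.score(X[-50:], y[-50:]), 3))
```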

Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model

Procedia PDF Downloads 336
27798 Training the Competences for the 'Expert Teacher': A Framework of Skills for Teachers

Authors: Sofia Cramerotti, Angela Cattoni, Laura Biancato, Dario Ianes

Abstract:

The recognition of specific standards for new professionals within the teaching profile is a necessary process for fostering an innovative school vision in accordance with the changes that schools are experiencing. In line with the reform of the national education and training system and with the National Training Plan for teachers, our Research and Development department developed a training project based on a framework (syllabus) of skills that each 'Expert Teacher' should master in order to fulfill what the different specific profiles require. The syllabus is a fundamental tool for a training process consistent with the teaching profiles, both to guide teachers entering service and to provide in-service teachers with a system for evaluating and improving their skills. Following the national and international literature on professional standards for teachers, we aggregated the skills of the syllabus into three macro areas: (1) professional skills related to the teacher profile and continuous training; (2) teaching skills related to school innovation; (3) organizing skills related to participation in school improvement. The syllabus is a framework that identifies and describes the skills of the expert teacher in all of their roles. However, the various skills take on different importance in the different profiles involved in the school; some of those skills define a role, while others may be secondary. Therefore, the characterization of the different profiles is represented by suitably weighted skill sets, so the same skill can characterize each profile differently. In the future, we hope that skills development and training for the teacher can evolve into skills development and training for the whole school staff (an 'Expert Team'). In this perspective, the school will benefit from a solid team in which the skills of the various profiles are all properly developed and well represented.

Keywords: framework, skills, teachers, training

Procedia PDF Downloads 177
27797 Application of Analytic Hierarchy Process Model to Weight and Prioritize Challenges and Barriers to Strategic Approach

Authors: Mohammad Mehdi Mohebi, Nima Kazempour, Mohammad Naeim Kazempour

Abstract:

Strategic thinking enables managers to find out which factors are effective in achieving the desired goals and how these factors create value for the customer. Strategic thinking can be interpreted as a form of mental and inner strength in the manager who, by utilizing it while considering the conditions of an unstable, changing global environment, makes decisions, plans actions, and designs the strategy of his organization in today's changing and unsustainable business environment. Strategic thinking is very important in today's business world, because without it, an organization's efforts to realize the strategies it develops will not be effective. In this study, through a detailed review of the challenges and barriers to strategic thinking examined by various scholars and experts, both theoretically and empirically, seven major factors were identified. Then, based on these main challenge factors and their related elements, a questionnaire was developed to determine their importance and priority from the perspective of strategic management experts. The reliability and validity of this instrument, including its structural validity, were examined and confirmed using statistical tests and factor analysis. The factors were then weighted and prioritized using the AHP technique and the opinions of scholars and experts. The prioritized barriers to strategic thinking are: lack of participatory management, lack of a systematic approach, difficulty in aligning the members of the organization, lack of an incentive-based organizational culture, behavioural and internal barriers of managers, lack of key managers, and lack of access to timely and accurate information.
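In AHP, each pairwise comparison matrix yields priority weights as its principal eigenvector, and a consistency ratio checks the judgments. A small hedged sketch with a hypothetical 3x3 matrix (a real study would use the full matrix over all seven factors):

```python
import numpy as np

# Hypothetical pairwise comparisons on Saaty's 1-9 scale for three barriers.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights: the principal eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = eigvals.real.argmax()
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency: CI = (lambda_max - n) / (n - 1); RI = 0.58 for n = 3.
n = A.shape[0]
cr = ((eigvals.real.max() - n) / (n - 1)) / 0.58
print("weights:", w.round(3), "CR:", round(cr, 3))   # CR < 0.1 is acceptable
```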

Keywords: strategic thinking, challenges and barriers to strategic thinking, EN bank, AHP method

Procedia PDF Downloads 540
27796 Determining the Importance Level of Factors Affecting Job Selection with the Method of AHP

Authors: Nurullah Ekmekci, Ömer Akkaya, Kazım Karaboğa, Mahmut Tekin

Abstract:

Job selection is one of the most important decisions people make, affecting their lives and how useful they can be to themselves and to society. There are many criteria to consider in job selection, and the number of criteria makes it a multi-criteria decision-making (MCDM) problem. In this study, job selection is treated as a multi-criteria decision-making problem and solved by the analytic hierarchy process (AHP), one of the multi-criteria decision-making methods. A survey containing five job selection criteria (ease of finding a job, salary status, job social security, the reputation of the work in the community, and the degree of difficulty of the work), drawn from the many possible job selection criteria, and four job alternatives (being an academician, working in the civil service, working in the private sector, and running one's own business) was conducted among the students of the Selcuk University Faculty of Economics and Administrative Sciences. As a result of the pairwise comparisons, the highest-weighted criteria in job selection and the most coveted job preferences were identified.
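Once the criteria weights and each alternative's local priorities under every criterion are obtained from the pairwise comparisons, AHP synthesizes a global score per alternative as a weighted sum. A sketch with hypothetical numbers (the study's actual weights are not given in the abstract):

```python
import numpy as np

# Hypothetical criteria weights (ease of finding a job, salary, social
# security, reputation, difficulty) and local priorities of the four job
# alternatives under each criterion; each column sums to 1.
criteria_w = np.array([0.15, 0.35, 0.20, 0.20, 0.10])
local = np.array([[0.40, 0.10, 0.30, 0.35, 0.25],   # academician
                  [0.25, 0.20, 0.40, 0.30, 0.25],   # civil service
                  [0.20, 0.40, 0.20, 0.20, 0.25],   # private sector
                  [0.15, 0.30, 0.10, 0.15, 0.25]])  # own business

scores = local @ criteria_w                # AHP synthesis: weighted aggregation
for job, s in zip(["academician", "civil service", "private sector",
                   "own business"], scores):
    print(f"{job}: {s:.3f}")               # highest score = preferred job
```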

Keywords: analytical hierarchy process, job selection, multi-criteria, decision making

Procedia PDF Downloads 395
27795 Using Nonhomogeneous Poisson Process with Compound Distribution to Price Catastrophe Options

Authors: Rong-Tsorng Wang

Abstract:

In this paper, we derive a pricing formula for catastrophe equity put options (CatEPuts) with non-homogeneous losses and approximated compound distributions. We assume that the loss claims arrival process is a nonhomogeneous Poisson process (NHPP) representing the clustered occurrences of loss claims, that the sizes of the loss claims are a sequence of independent and identically distributed random variables, and that the accumulated loss therefore follows a compound distribution, which is approximated by a heavy-tailed distribution. A numerical example is given to calibrate the parameters, and we discuss how the value of the CatEPut is affected by changes in the parameters of the proposed pricing model.
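A Monte Carlo sketch of the loss model's ingredients may clarify the setup: NHPP claim arrivals simulated by thinning, heavy-tailed i.i.d. claim sizes, and a toy equity process damped by the accumulated loss. The intensity function, loss-to-equity coupling, and all parameters below are illustrative assumptions, not the paper's calibrated model or closed-form formula.

```python
import numpy as np

rng = np.random.default_rng(6)

def nhpp_times(rate_fn, rate_max, T):
    """Simulate NHPP arrival times on [0, T] by thinning."""
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > T:
            return times
        if rng.random() < rate_fn(t) / rate_max:
            times.append(t)

T, S0, K, r, sigma, alpha = 1.0, 100.0, 90.0, 0.02, 0.2, 0.02
rate = lambda t: 5.0 + 4.0 * np.sin(2 * np.pi * t)   # clustered arrivals

payoffs = []
for _ in range(5000):
    n_claims = len(nhpp_times(rate, rate_max=9.0, T=T))
    L = rng.pareto(2.5, size=n_claims).sum()          # heavy-tailed claim sizes
    z = rng.normal()
    # Toy equity: lognormal diffusion knocked down by the accumulated loss.
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z - alpha * L)
    payoffs.append(max(K - ST, 0.0))

print("Monte Carlo CatEPut price:", round(np.exp(-r * T) * np.mean(payoffs), 3))
```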

Keywords: catastrophe equity put options, compound distributions, nonhomogeneous Poisson process, pricing model

Procedia PDF Downloads 164
27794 Educational Tours as a Learning Tool for the Third-Year Tourism Students of De La Salle University, Dasmarinas

Authors: Jackqueline Uy, Hannah Miriam Verano, Crysler Luis Verbo, Irene Gueco

Abstract:

Educational tours are part of the curriculum of the College of Tourism and Hospitality Management, De La Salle University-Dasmarinas, and they are highly significant to the students, especially Tourism students. The purpose of this study was to determine how effective educational tours are as a learning tool, using the Experiential Learning Theory of David Kolb. The study determined the demographic profile of the third-year tourism students in terms of gender, section, educational tours joined, and monthly family income, and it determined whether there is a significant difference between the demographic profile of the respondents and their assessment of educational tours as a learning tool. The researchers used a historical research design, with the third-year students of the Bachelor of Science in Tourism Management as the population, and used a random sampling method. The researchers constructed a survey questionnaire and utilized statistical tools such as the weighted mean, frequency distribution, percentage, standard deviation, t-test, and ANOVA. The results describe the profile of the respondents in terms of gender, section, educational tour(s) joined, and family monthly income. The findings show that the third-year tourism management students strongly agree that educational tours are a highly effective learning tool in terms of active experimentation, concrete experience, reflective observation, and abstract conceptualisation, based on the data gathered from the respondents.

Keywords: CTHM, educational tours, experiential learning theory, De La Salle University Dasmarinas, tourism

Procedia PDF Downloads 168
27793 Implementation of an IoT Sensor Data Collection and Analysis Library

Authors: Jihyun Song, Kyeongjoo Kim, Minsoo Lee

Abstract:

Due to the development of information technology and wireless Internet technology, various kinds of data are being generated in many fields. These data are advantageous in that they provide real-time information to the users themselves; when the data are accumulated and analyzed, however, much more information can be extracted. In addition, the development and dissemination of boards such as the Arduino and the Raspberry Pi have made it possible to easily test various sensors and to collect sensor data directly by using database tools such as MySQL. These directly collected data can be used in various research projects and can be useful as data-mining material. However, there are many difficulties in using such boards to collect data, especially when the user is not a computer programmer or is using them for the first time. Even when data are collected, a lack of expert knowledge or experience may cause difficulties in data analysis and visualization. In this paper, we construct a library for sensor data collection and analysis to overcome these problems.
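As one concrete example of the clustering methods named in the keywords, the sketch below runs DBSCAN over hypothetical (temperature, humidity) rows such as the library might pull from a MySQL table; sparse readings are labelled -1 and can be flagged as outliers.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Hypothetical sensor rows: a normal operating regime plus a few stray readings.
normal = rng.normal([22.0, 45.0], [0.5, 2.0], size=(300, 2))
stray = rng.normal([35.0, 20.0], [3.0, 5.0], size=(10, 2))
X = StandardScaler().fit_transform(np.vstack([normal, stray]))

labels = DBSCAN(eps=0.4, min_samples=5).fit_predict(X)
print("clusters:", sorted(set(labels) - {-1}),
      "outliers:", int((labels == -1).sum()))
```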

Keywords: clustering, data mining, DBSCAN, k-means, k-medoids, sensor data

Procedia PDF Downloads 372
27792 Validity of Clinical Disease Activity Index (CDAI) to Evaluate the Disease Activity of Rheumatoid Arthritis Patients in Sri Lanka: A Prospective Follow-up Study Based on Newly Diagnosed Patients

Authors: Keerthie Dissanayake, Chandrika Jayasinghe, Priyani Wanigasekara, Jayampathy Dissanayake, Ajith Sominanda

Abstract:

The routine use of the Disease Activity Score-28 (DAS28) to assess disease activity in rheumatoid arthritis (RA) is limited by its dependency on laboratory investigations and the complex calculations involved. In contrast, the Clinical Disease Activity Index (CDAI) is simple to calculate, which makes the "treat to target" strategy for the management of RA more practical. We aimed to assess the validity of the CDAI compared to the DAS28 in RA patients in Sri Lanka. A total of 103 newly diagnosed RA patients were recruited, and their disease activity was calculated using the DAS28 and the CDAI during the first visit to the clinic (0 months) and re-assessed at 4 and 9 months in the follow-up visits. The validity of the CDAI, compared to the DAS28, was then evaluated. Patients showed a female preponderance (6:1) and a short symptom duration (mean = 6.33 months). The construct validity of the CDAI, as assessed by Cronbach's α test, was 0.868. Convergent validity was assessed by correlation and kappa statistics: strong positive correlations were observed between the CDAI and the DAS28 at the baseline (0 months) and at 4 and 9 months of evaluation (Spearman's r = 0.9357, 0.9354, and 0.9106, respectively), and moderate to good inter-rater agreement between the DAS28 and the CDAI was observed (weighted kappa of 0.660, 0.519, and 0.741 at 0, 4, and 9 months, respectively). Discriminant validity, as assessed by ROC curves at 0, 4, and 9 months of evaluation, showed areas under the curve (AUC) of 0.958, 0.985, and 0.914, respectively. The suggested cut-off points for the CDAI disease activity categories according to the ROC curves were ≤ 2 (remission), > 2 to ≤ 5 (low), > 5 to ≤ 18 (moderate), and > 18 (high). These findings indicate that the CDAI has good concordance with the DAS28 in assessing disease activity in RA patients in this study sample.
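For reference, the CDAI is the simple sum of four clinical components (28-joint tender and swollen counts plus patient and evaluator global assessments on 0-10 scales), which is what frees it from laboratory tests. The sketch below computes it and applies the cut-offs proposed in this study:

```python
def cdai(tjc28, sjc28, patient_global, evaluator_global):
    """CDAI = tender joints (0-28) + swollen joints (0-28) + two 0-10 globals."""
    return tjc28 + sjc28 + patient_global + evaluator_global

def category(score):
    # Cut-off points suggested by this study's ROC analysis.
    if score <= 2:
        return "remission"
    if score <= 5:
        return "low"
    if score <= 18:
        return "moderate"
    return "high"

score = cdai(tjc28=6, sjc28=4, patient_global=5.0, evaluator_global=4.0)
print(score, category(score))   # 19.0 -> high disease activity
```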

Keywords: rheumatoid arthritis, CDAI, disease activity, Sri Lanka, validation

Procedia PDF Downloads 148
27791 Efficient Subgoal Discovery for Hierarchical Reinforcement Learning Using Local Computations

Authors: Adrian Millea

Abstract:

In hierarchical reinforcement learning, one of the main issues encountered is the discovery of subgoal states or options (policies reaching subgoal states) by partitioning the environment in a meaningful way. This partitioning usually requires an expensive global clustering operation or an eigendecomposition of the Laplacian of the state graph. We propose a local solution to this issue that is much more efficient than algorithms using global information, and that successfully discovers subgoal states by computing, for each state, a simple function of its neighbors, which we call heterogeneity. Moreover, we construct a value function using the difference in heterogeneity from one step to the next as the reward, such that we are able to explore the state space much more efficiently than with, say, epsilon-greedy exploration. The same principle can then be applied at higher levels of the hierarchy, where the states are now the subgoals discovered at the level below.
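To make the local idea concrete, the sketch below scores every state of a two-room gridworld with a stand-in heterogeneity measure (how much a state's connectivity differs from its neighbors'); the paper's exact definition is not given in the abstract, so this measure is purely hypothetical, but it does single out the doorway, a classic subgoal.

```python
import networkx as nx

# Two 5x5 rooms joined by a single doorway node.
G = nx.grid_2d_graph(5, 5)
H = nx.relabel_nodes(nx.grid_2d_graph(5, 5), lambda n: (n[0], n[1] + 6))
G = nx.union(G, H)
door = (2, 5)
G.add_edge((2, 4), door)
G.add_edge(door, (2, 6))

def heterogeneity(G, s):
    """Hypothetical local measure: gap between a state's degree and its
    neighbors' mean degree; computable from the neighborhood alone."""
    nbrs = list(G[s])
    return abs(G.degree(s) - sum(G.degree(n) for n in nbrs) / len(nbrs))

scores = {s: heterogeneity(G, s) for s in G}
print(max(scores, key=scores.get))   # the doorway state stands out
```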

Keywords: exploration, hierarchical reinforcement learning, locality, options, value functions

Procedia PDF Downloads 169
27790 Design of Two-Channel Quadrature Mirror Filter Banks Using a Transformation Approach

Authors: Ju-Hong Lee, Yi-Lin Shieh

Abstract:

Two-dimensional (2-D) quadrature mirror filter (QMF) banks have been widely considered for high-quality coding of image and video data at low bit rates. When implementing subband coding, a 2-D QMF bank is required to have an exactly linear-phase response without magnitude distortion, i.e., the perfect reconstruction (PR) characteristic. The design problem of 2-D QMF banks with the PR characteristic has been considered in the literature for many years. This paper presents a transformation approach to designing 2-D two-channel QMF banks. Under a suitable one-dimensional (1-D) to two-dimensional (2-D) transformation with a specified decimation/interpolation matrix, the analysis and synthesis filters of the QMF bank are composed of 1-D causal and stable digital allpass filters (DAFs) and possess the 2-D doubly complementary half-band (DC-HB) property. This reduces the design problem of the two-channel QMF bank to finding the real coefficients of the 1-D recursive DAFs. The design problem is formulated as a minimax phase approximation for the 1-D DAFs, and a novel objective function is derived for the 1-D minimax phase approximation. As a result, the problem of minimizing the objective function can be solved simply by using the well-known weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The novelty of the proposed design method is that the design procedure is very simple and the designed 2-D QMF bank achieves a perfect magnitude response and possesses a satisfactory phase response. Simulation results show that the proposed design method provides much better design performance and much lower design complexity than existing techniques.
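The "WLS for minimax" machinery invoked here is essentially Lawson-style iteratively reweighted least squares: solve a weighted LS problem, then re-emphasize the frequencies with the largest error until the solution approaches equiripple. The sketch below demonstrates the idea on a simple 1-D linear-phase FIR amplitude design; the paper applies the same machinery to the phase error of 1-D recursive allpass filters, whose objective function is not reproduced here.

```python
import numpy as np

M = 10                                # symmetric FIR, amplitude = sum a_k cos(k w)
grid = np.linspace(0, np.pi, 400)
passband, stopband = grid <= 0.4 * np.pi, grid >= 0.6 * np.pi
band = passband | stopband
omega = grid[band]
desired = passband[band].astype(float)    # 1 in passband, 0 in stopband

C = np.cos(np.outer(omega, np.arange(M + 1)))
weights = np.ones_like(omega)
for _ in range(30):                       # Lawson iterations
    Wsqrt = np.sqrt(weights)
    a, *_ = np.linalg.lstsq(Wsqrt[:, None] * C, Wsqrt * desired, rcond=None)
    err = np.abs(C @ a - desired)
    weights *= err                        # boost the worst-error frequencies
    weights /= weights.sum()

print("max design error:", round(err.max(), 4))  # tends toward the minimax value
```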

Keywords: Quincunx QMF bank, doubly complementary filter, digital allpass filter, WLS algorithm

Procedia PDF Downloads 223
27789 Development of an Implicit Physical Influence Upwind Scheme for Cell-Centered Finite Volume Method

Authors: Shidvash Vakilipour, Masoud Mohammadi, Rouzbeh Riazi, Scott Ormiston, Kimia Amiri, Sahar Barati

Abstract:

An essential component of a finite volume method (FVM) is the advection scheme that estimates values on the cell faces from the calculated values at the nodes or cell centers. The most widely used advection schemes are upwind schemes, which have been developed in the FVM on various kinds of structured and unstructured grids. In this research, the physical influence scheme (PIS) is developed for a cell-centered FVM that uses an implicit coupled solver. Results are compared with the exponential differencing scheme (EDS) and the skew upwind differencing scheme (SUDS). The accuracy of these schemes is evaluated for a lid-driven cavity flow at Re = 1000, 3200, and 5000 and a backward-facing step flow at Re = 800. Simulations show considerable differences between the results of the EDS scheme and the benchmarks, especially for the lid-driven cavity flow at high Reynolds numbers; these differences arise from false diffusion. Comparing the SUDS and PIS schemes shows relatively close results for the backward-facing step flow but different results for the lid-driven cavity flow. The poor results of the SUDS in the lid-driven cavity flow can be related to its lack of sensitivity to the pressure difference between the cell face and the upwind points, which is critical for the prediction of such vortex-dominated flows.
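The opening sentence is worth a tiny demonstration: in a finite volume update, each cell's change is the difference of fluxes through its faces, and the scheme's job is to pick the face values. A minimal first-order upwind sketch (generic, not the PIS itself) advects a step profile and exhibits the false diffusion mentioned above:

```python
import numpy as np

n, c = 200, 0.5                        # cells and CFL number, velocity u > 0
u = np.where(np.arange(n) < n // 4, 1.0, 0.0)   # step profile, periodic domain

for _ in range(100):
    face = u.copy()                    # upwind: face value = upwind cell value
    u -= c * (face - np.roll(face, 1)) # flux difference across each cell

# The step stays bounded in [0, 1] but smears out: first-order false diffusion.
print("min/max:", u.min().round(3), u.max().round(3))
```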

Keywords: cell-centered finite volume method, coupled solver, exponential differencing scheme (EDS), physical influence scheme (PIS), pressure weighted interpolation method (PWIM), skew upwind differencing scheme (SUDS)

Procedia PDF Downloads 277
27788 A Product-Specific/Unobservable Approach to Segmentation for a Value Expressive Credit Card Service

Authors: Manfred F. Maute, Olga Naumenko, Raymond T. Kong

Abstract:

Using data from a nationally representative financial panel of Canadian households, this study develops a psychographic segmentation of the customers of a value-expressive credit card service and tests for effects on relational response differences. The variety of segments elicited by agglomerative and k-means clustering, and the familiar profiles of the individual clusters, suggest that the face validity of the psychographic segmentation was quite high. Segmentation had a significant effect on customer satisfaction and relationship depth; moreover, when socio-demographic characteristics like household size and income were accounted for in the psychographic segmentation, the effect on relational response differences was magnified threefold. Implications for the segmentation of financial services markets are considered.

Keywords: customer satisfaction, financial services, psychographics, response differences, segmentation

Procedia PDF Downloads 328
27787 Multidimensional Item Response Theory Models for Practical Application in Large Tests Designed to Measure Multiple Constructs

Authors: Maria Fernanda Ordoñez Martinez, Alvaro Mauricio Montenegro

Abstract:

This work presents a statistical methodology for measuring and finding constructs in latent semantic analysis. The approach uses the qualities of factor analysis on binary data, with interpretations drawn from item response theory. More precisely, we propose first reducing dimensionality, applying principal component analysis to the linguistic data, and then producing group axes from a clustering analysis of the semantic data. This approach allows the user to give meaning to the resulting clusters and to find the real latent structure present in the data. The methodology is applied to a set of real semantic data, showing impressive results for coherence, speed, and precision.

Keywords: semantic analysis, factorial analysis, dimension reduction, penalized logistic regression

Procedia PDF Downloads 439
27786 Localization of Geospatial Events and Hoax Prediction in the UFO Database

Authors: Harish Krishnamurthy, Anna Lafontant, Ren Yi

Abstract:

Unidentified flying objects (UFOs) have been an interesting topic for most enthusiasts, and people all over the United States report such sightings online at the National UFO Reporting Center (NUFORC). Some of these reports are hoaxes. Among those that seem legitimate, our task is not to establish that these events are indeed related to flying objects from aliens in outer space; rather, we intend to identify whether a report was a hoax, as identified by the UFO database team with their existing curation criteria. The database, however, provides a wealth of information that can be exploited for various analyses and insights, such as social reporting, identifying real-time spatial events, and much more. We perform analyses to localize these geospatial time-series events and correlate them with known real-time events. This paper does not confirm any legitimacy of alien activity; rather, it attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events happen in geospatial clusters and are also time-based. We look at cluster density and use data visualization to search the space of various cluster realizations in order to identify the most probable clusters, which provide information about the proximity of such activity. A random forest classifier is also presented to distinguish true events from hoax events, using the best features available, such as region, week, time period, and duration. Lastly, we show the performance of the scheme on various days and correlate it with real-time events; one of the UFO reports strongly correlates with a missile test conducted in the United States.
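A hedged sketch of the hoax-classifier stage: a random forest over the features the paper names (region, week, time period, duration) with the curators' hoax label as target. The data below are random placeholders, so the reported accuracy is meaningless except as a smoke test.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 2000
df = pd.DataFrame({
    "region": rng.integers(0, 50, n),       # encoded report region
    "week": rng.integers(1, 53, n),
    "time_period": rng.integers(0, 4, n),   # e.g., night/morning/afternoon/evening
    "duration_min": rng.exponential(10, n),
    "hoax": rng.integers(0, 2, n),          # NUFORC curation label (placeholder)
})

X_tr, X_te, y_tr, y_te = train_test_split(
    df.drop(columns="hoax"), df["hoax"], test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```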

Keywords: time-series clustering, feature extraction, hoax prediction, geospatial events

Procedia PDF Downloads 372