Search results for: weighted approximation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1040

230 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. The model is built on an educated assumption: individual leg-times follow log-normal distributions. Our key idea is to combine Fenton-Wilkinson approximations of changeover-time distributions with an estimator for the total number of teams, as in the well-known German tank problem. The resulting place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the field. The model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place-prediction root-mean-square errors than linear regression, mord regression and Gaussian process regression.
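The core of the Fenton-Wilkinson step is moment matching: a sum of independent log-normal leg-times is approximated by a single log-normal whose first two moments agree with those of the sum. A minimal sketch (the function name and all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Moment-match a sum of independent log-normals to one log-normal."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    means = np.exp(mus + sigmas**2 / 2)              # E[X_i] of each leg
    variances = (np.exp(sigmas**2) - 1) * means**2   # Var[X_i] of each leg
    m, v = means.sum(), variances.sum()              # moments of the sum
    sigma2 = np.log(1 + v / m**2)                    # matched log-variance
    mu = np.log(m) - sigma2 / 2                      # matched log-mean
    return mu, np.sqrt(sigma2)

# Changeover time after three legs, each leg-time log-normal (toy parameters)
mu_s, sigma_s = fenton_wilkinson([4.0, 4.1, 3.9], [0.15, 0.2, 0.1])
```

By construction the approximating log-normal reproduces the exact mean and variance of the sum, which is what makes the approximation useful for changeover-time distributions.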

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 106
229 Design and Analysis of Semi-Active Isolation System in Low Frequency Excitation Region for Vehicle Seat to Reduce Discomfort

Authors: Andrea Tonoli, Nicola Amati, Maria Cavatorta, Reza Mirsanei, Behzad Mozaffari, Hamed Ahani, Akbar Karamihafshejani, Mohammad Ghazivakili, Mohammad Abuabiah

Abstract:

The vibrations transmitted to drivers and passengers through the vehicle seat seriously affect their attention, fatigue and physical health, reducing the comfort and efficiency of the occupants. Recently, some researchers have focused on vibrations at low excitation frequencies (0.5-5 Hz), which are considered the main risk factor for the lumbar spine, but their solutions were not applicable to A- and B-segment cars because of size and weight. A semi-active system with two symmetric negative stiffness structures (NSS) in parallel with a positive stiffness structure and actuators is proposed to attenuate low-frequency excitation; it also makes the system adaptable to different passenger weights, which makes it applicable to A- and B-segment cars. A three-degree-of-freedom system is considered, its dynamic equations are derived, and the system is simulated in MATLAB to analyze its performance. The design procedure shifts the resonance peak of the frequency-response curve to the left, widens the isolation range and, especially, minimizes the peak of the frequency-response curve. Different classes of road profile, per the ISO standard, are applied as inputs to evaluate the performance of the system. To evaluate comfort, we extract the RMS value of the vertical acceleration acting on the passenger's body and then apply a band-pass filter that accounts for human sensitivity to acceleration. According to ISO, this weighted acceleration is below 0.315 m/s^2, so the ride is considered comfortable.
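The comfort check described above can be sketched as a band-pass filter followed by an RMS, compared against the 0.315 m/s^2 threshold. The second-order Butterworth band below is only a crude stand-in for the full ISO 2631-1 Wk weighting filter, and the signal parameters are made up for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def weighted_rms_acceleration(accel, fs, band=(0.5, 80.0)):
    """Crude comfort metric: band-pass the vertical acceleration, then RMS.
    (A stand-in for the proper ISO 2631-1 Wk frequency weighting.)"""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, accel)          # zero-phase filtering
    return np.sqrt(np.mean(filtered**2))

fs = 200.0                                    # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
accel = 0.3 * np.sin(2 * np.pi * 2.0 * t)     # 2 Hz seat vibration, 0.3 m/s^2
rms = weighted_rms_acceleration(accel, fs)
comfortable = rms < 0.315                     # ISO comfort threshold from the abstract
```

For a sinusoid inside the passband the weighted RMS is close to amplitude divided by the square root of two, so this example lands comfortably below the threshold.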

Keywords: low frequency excitation, negative stiffness, seat vehicle, vibration isolation

Procedia PDF Downloads 415
228 Identification of Groundwater Potential Zones Using Geographic Information System and Multi-Criteria Decision Analysis: A Case Study in Bagmati River Basin

Authors: Hritik Bhattarai, Vivek Dumre, Ananya Neupane, Poonam Koirala, Anjali Singh

Abstract:

The availability of clean and reliable groundwater is essential for sustaining human and environmental health. Groundwater is a crucial resource that contributes significantly to the total annual water supply. However, over-exploitation has depleted groundwater availability considerably and has led to some land subsidence. Delineating groundwater potential zones is therefore vital for protecting water quality and managing groundwater systems, and such zones can be marked with the assistance of Geographic Information System techniques. In this study, a standard methodology integrating GIS and Analytic Hierarchy Process (AHP) techniques was proposed to determine groundwater potential. Thematic layers were generated for parameters such as geology, slope, soil, temperature, rainfall, drainage density, and lineament density. Identifying and mapping potential groundwater zones nevertheless remains challenging due to the complex and dynamic nature of aquifer systems. A weighted overlay was performed in ArcGIS, with appropriate ranks assigned to the classes of each parameter, and multi-criteria decision analysis (MCDA) was applied to weigh and prioritize the parameters according to their relative influence on groundwater potential. Three groundwater potential zones were delineated: low, moderate, and high. Our analysis showed that the central and lower parts of the Bagmati River Basin have the highest potential, covering 7.20% of the total area, whereas the northern and eastern parts have lower potential. The identified zones can guide future groundwater exploration and management strategies in the region.
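The weighted-overlay step amounts to a weighted sum of reclassified rasters. The layer names and AHP weights below are assumed for illustration and are not the weights derived in the study:

```python
import numpy as np

# AHP-style weights per thematic layer (assumed values, summing to 1.0)
weights = {"geology": 0.25, "slope": 0.15, "soil": 0.10, "rainfall": 0.20,
           "drainage_density": 0.15, "lineament_density": 0.15}

def weighted_overlay(layers, weights):
    """Combine reclassified raster layers (ranks 1-5) into a potential index."""
    index = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, w in weights.items():
        index += w * layers[name]
    return index

# Tiny 2x2 "rasters" with ranks 1 (low) to 5 (high); identical here for clarity
layers = {name: np.array([[3, 5], [1, 4]]) for name in weights}
gwpi = weighted_overlay(layers, weights)
```

Because the illustrative weights sum to one and every layer is identical, the resulting index equals the rank raster; with real, differing layers the index blends them according to their AHP priorities before thresholding into low, moderate and high zones.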

Keywords: groundwater, geographic information system, analytic hierarchy processes, multi-criteria decision analysis, Bagmati

Procedia PDF Downloads 80
227 Select Communicative Approaches and Speaking Skills of Junior High School Students

Authors: Sonia Arradaza-Pajaron

Abstract:

Speaking English as a medium of instruction poses a real challenge for students who are non-native English speakers, especially when proficiency is required in most communicative classroom instruction. It becomes a real burden for students whose English language orientation is not well facilitated and encouraged by teachers in national high schools. This descriptive-correlational study examined the relationship between the communicative approaches commonly utilized in classroom instruction and the level of speaking skills among the identified high school students. Survey questionnaires, interviews, and observation sheets were the research instruments used to generate the data. Data were analyzed and treated statistically using the weighted mean for speaking-skill levels and Pearson r for the relationship between the two variables. Findings revealed that the English speaking skill of the high school students is average overall, although in the speaking sub-skills of grammar, pronunciation and fluency the students were above average. There was also a clear relationship between some communicative approaches and the respondents' speaking skills. Most notable among them was role-playing, compared with storytelling, informal debate, brainstorming, oral reporting, and others, perhaps because role-playing is the most commonly used approach in the classroom. This implies that when these students are given enough time and autonomy in expressing their ideas or their comprehension of lessons, they express themselves spontaneously in the second language. It can further be concluded that high school students have the capacity to express ideas in the second language when they are encouraged and well facilitated by teachers. Identifying and better implementing suitable communicative approaches should therefore raise students' classroom engagement.
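The Pearson r statistic used to relate approach usage to speaking-skill level can be computed from first principles. The paired scores below are hypothetical, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: frequency of role-playing use vs. speaking-skill rating
role_play = [2, 3, 3, 4, 5, 5]
speaking = [2.5, 3.0, 3.5, 3.5, 4.5, 4.0]
r = pearson_r(role_play, speaking)
```

A value of r near +1 would indicate that classes using role-playing more often also rate higher on speaking skill, which is the kind of relationship the study reports.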

Keywords: communicative approaches, comprehension, role playing, speaking skills

Procedia PDF Downloads 156
226 Finite Deformation of a Dielectric Elastomeric Spherical Shell Based on a New Nonlinear Electroelastic Constitutive Theory

Authors: Odunayo Olawuyi Fadodun

Abstract:

Dielectric elastomers (DEs) are intelligent materials with salient features such as electromechanical coupling, light weight, fast actuation speed, low cost and high energy density that make them good candidates for numerous engineering applications. This paper adopts a new nonlinear electroelastic constitutive theory to examine the radial deformation of a pressurized thick-walled spherical shell of soft dielectric material with compliant electrodes on its inner and outer surfaces. A general formula for the internal pressure, which depends on the deformation and on either a potential difference between the boundary electrodes or uniform surface charge distributions, is obtained in terms of special functions. To illustrate the effects of an applied electric field on the mechanical behaviour of the shell, three energy functions with distinct mechanical properties are employed for numerical purposes. The qualitative behaviour of the shells is preserved in the presence of an applied electric field, and the influence of a field due to a potential difference declines more slowly with increasing deformation than that of a field produced by a surface charge. Counterpart results are then presented for the thin-walled shell approximation as a limiting case of the thick-walled shell, without restriction on the energy density. In the absence of internal pressure, it is found that inflation is caused by the application of an electric field alone. The numerical results of the theory presented in this work agree with those predicted by the widely adopted Dorfmann and Ogden model.

Keywords: constitutive theory, elastic dielectric, electroelasticity, finite deformation, nonlinear response, spherical shell

Procedia PDF Downloads 57
225 Optimisation of Metrological Inspection of a Developmental Aeroengine Disc

Authors: Suneel Kumar, Nanda Kumar J. Sreelal Sreedhar, Suchibrata Sen, V. Muralidharan

Abstract:

Fan technology is critical and crucial for any aero engine. The fan disc forms a critical part of the fan module, and it is an airworthiness requirement to have a metrologically qualified disc. The current study uses tactile probing and scanning on an articulated measuring machine (AMM), a bridge-type coordinate measuring machine (CMM) and metrology software for intermediate and final dimensional and geometrical verification during prototype development of a disc manufactured through forging and machining. The circumferential dovetails, manufactured by milling, are evaluated through the analysed metrological process. Metrological optimisation requires a change of philosophy: quality measurements must become available as fast as possible to improve process knowledge and accelerate the process, while remaining accurate, precise and traceable. Offline CMM programming for inspection and the optimisation of the CMM inspection plan are crucial parts of the study and are discussed. A dimensional measurement plan per the ASME B89.7.2 standard is an important requirement for reaching an optimised CMM measurement plan and strategy. The effects of the probing strategy, stylus configuration and approximation strategy on the measurements of the circumferential dovetails of the developmental prototype disc are discussed. The results are presented as improvements in R&R (repeatability and reproducibility) values, with uncertainty levels within the desired limits. The findings on the measurement strategy adopted for dovetail evaluation and on inspection-time optimisation are discussed with the help of various analyses and graphical outputs obtained from the verification process.

Keywords: coordinate measuring machine, CMM, aero engine, articulated measuring machine, fan disc

Procedia PDF Downloads 87
224 Qualitative and Quantitative Methods in Multidisciplinary Fields Collection Development

Authors: Hui Wang

Abstract:

Traditional collection-building approaches are limited in breadth and scope and are not necessarily suitable for developing multidisciplinary collections in the institutes of the Chinese Academy of Sciences. The growth of multidisciplinary research requires a viable approach to collection development in these libraries. This study uses qualitative and quantitative analysis to assess collections. The quantitative analysis consists of three levels of evaluation: realistic demand, potential demand and trend demand. For each institute, three samples were selected: from the institute itself, from one or more top international institutes in closely related research fields, and from future research hotspots. Each sample contains an appropriate number of papers published in the most recent five years. Keywords and organization names were combined to search commercial databases and institutional repositories, and the publishing information and citations in the bibliographies of these papers were used to build the dataset. A weighted evaluation model and citation analysis were used to calculate a demand-intensity index for every journal and book. Principal-investigator selection and database traffic provide qualitative evidence of demand frequency. Demand intensity, demand frequency and academic-committee recommendations were considered together to produce collection-development recommendations, and collection gaps or weaknesses were identified by comparing the current collection with the recommended one. This approach has been applied in more than 80 institute libraries of the Chinese Academy of Sciences over the past three years. The evaluation results provided important evidence for collection building in the following year, and the latest user survey showed that the updated collections' capacity to support research in multidisciplinary subject areas has increased significantly.
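The weighted evaluation can be sketched as a demand-intensity index that combines citation counts from the three sample sets using level weights. The abstract does not specify the actual model, so the weights, journal names and counts below are all assumed for illustration:

```python
# Assumed weights for the three evaluation levels described in the abstract
LEVEL_WEIGHTS = {"realistic": 0.5, "potential": 0.3, "trend": 0.2}

def demand_intensity(citation_counts):
    """Illustrative demand-intensity index for one journal: a weighted sum of
    the citation counts it receives in the three demand-level sample sets."""
    return sum(LEVEL_WEIGHTS[level] * count
               for level, count in citation_counts.items())

journals = {
    "Journal A": {"realistic": 120, "potential": 40, "trend": 10},
    "Journal B": {"realistic": 20, "potential": 90, "trend": 60},
}
# Rank candidate titles by demand intensity, highest first
ranked = sorted(journals, key=lambda j: demand_intensity(journals[j]), reverse=True)
```

Comparing such a ranked recommendation list against current holdings is how gaps and weaknesses would surface in this sketch.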

Keywords: citation analysis, collection assessment, collection development, quantitative analysis

Procedia PDF Downloads 189
223 Using Short Learning Programmes to Develop Students’ Digital Literacies in Art and Design Education

Authors: B.J. Khoza, B. Kembo

Abstract:

Global socioeconomic developments and the ever-growing technological advancement of the art and design industry indicate the pivotal importance of lifelong learning. A discrepancy exists between competencies, personal ambition, and workplace requirements, yet few, if any, institutions of higher learning in South Africa offer Short Learning Programmes (SLPs) in art and design education. Traditionally, art and design education is delivered face to face via a hands-on approach, and the enduring perception among educators is that it does not lend itself to online delivery. SLPs are a concentrated way to generate revenue and attract prospective students to further study, often of value to both students and employers; higher education institutions use them to generate income in support of core academic programmes. However, there is a gap in translating art and design studio pedagogy into SLPs that provide quality education, are adaptable, and are delivered in a blended mode. In this paper, we propose a conceptual framework, drawing on secondary research to analyse existing work on SLPs for art and design education. We aim to add a new dimension to the use of design-based research for short learning programmes in art and design education. The study undertakes a qualitative analysis through the lens of Herrington, McKenney, Reeves and Oliver's (2005) principles of the design-based research approach. The results indicate that design-based research is not only an effective methodological approach for developing and deploying an art and design curriculum for first-year students in a higher education context but also has the potential to guide future research. The findings propose that the design-based research approach can bring theory and praxis together around a common purpose: designing context-based solutions to educational problems.

Keywords: design education, design-based research, digital literacies, multi-literacies, short learning programme

Procedia PDF Downloads 136
222 Advanced Analysis on Dissemination of Pollutant Caused by Flaring System Effect Using Computational Fluid Dynamics (CFD) Fluent Model with WRF Model Input in Transition Season

Authors: Benedictus Asriparusa

Abstract:

In the oil industry, production is accompanied by associated natural gas, much of which is flared, wasting a large amount of energy, mostly in developing countries, and contributing to global warming. This research presents an overview of the methods employed by researchers at PT Chevron Pacific Indonesia in the Minas area to measure and drastically reduce gas flaring and its emissions, combining analytical studies, numerical studies, modeling and computer simulation. A flaring system is the controlled burning of natural gas in the course of routine oil and gas production operations; the burning occurs at the end of a flare stack or boom, and the combustion process releases greenhouse-gas emissions such as NO2, CO2 and SO2 that affect the air and environment around the industrial area. A simulation is therefore needed to map the dissemination pattern of the pollutants. This paper reviews trends and current developments in gas flaring modeling in order to predict the dominant variables affecting pollutant dissemination. The Fluent model is used to simulate the distribution of pollutant gas leaving the stack, while WRF model output is used to overcome the limitations of the available meteorological and atmospheric data for the study area. The study focuses on the transition season of 2012 in the Minas area. The goal of the simulation is to identify the times that most influence pollutant dissemination, divided into two main cases: the fastest wind and the slowest wind. According to the simulation results, the fastest wind disperses pollutants horizontally, while the slowest wind disperses them vertically.

Keywords: flaring system, fluent model, dissemination of pollutant, transition season

Procedia PDF Downloads 354
221 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model

Authors: Donatella Giuliani

Abstract:

In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. First, the Firefly Algorithm, a stochastic global optimization technique inspired by the flashing behaviour of fireflies, is applied in a histogram-based search for cluster means; in this context it determines the number of clusters and the corresponding cluster means. These means are then used to initialize the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is a weighted sum of Gaussian component densities, whose parameters are estimated with the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be regarded as prior probabilities of the components. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach proves solid and reliable even when applied to complex grayscale images. Validation was performed with several standard measures: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK) and the Davies-Bouldin (DB) index. The results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. A further advantage of the methodology is that using the maxima of the responsibilities for pixel assignment considerably reduces the computational cost.
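The Bayes-rule assignment step can be sketched directly: given component weights, means and standard deviations (for instance, initialized from Firefly-found cluster means and refined by EM), each gray level is assigned to the component with maximal posterior responsibility. The parameter values below are illustrative:

```python
import numpy as np

def gmm_posteriors(intensities, weights, means, stds):
    """Posterior responsibility of each Gaussian component for each gray
    level, via Bayes' rule: prior * likelihood, normalized per intensity."""
    x = np.asarray(intensities, float)[:, None]
    w = np.asarray(weights, float)[None, :]
    mu = np.asarray(means, float)[None, :]
    sd = np.asarray(stds, float)[None, :]
    joint = w * np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return joint / joint.sum(axis=1, keepdims=True)

# Two components, e.g. around cluster means found by the Firefly Algorithm
post = gmm_posteriors([40, 200], weights=[0.5, 0.5], means=[50, 180], stds=[20, 25])
labels = post.argmax(axis=1)   # assign each gray level to the most responsible cluster
```

In a full implementation the same argmax is applied to all 256 gray levels once, and pixels are then labeled by a lookup on their intensity, which is the computational saving the abstract mentions.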

Keywords: clustering images, firefly algorithm, Gaussian mixture model, meta heuristic algorithm, image segmentation

Procedia PDF Downloads 195
220 Features of the Functional and Spatial Organization of Railway Hubs as a Part of the Urban Nodal Area

Authors: Khayrullina Yulia Sergeevna, Tokareva Goulsine Shavkatovna

Abstract:

The article analyzes modern major railway hubs as parts of Urban Nodal Areas (UNA). The term was introduced into the theory of urban planning at the end of the 20th century by Tokareva G.S., who jointly with Gutnov A.E. investigated the structure-forming elements of the city. The UNA is the basic unit, the "cell", of the city structure; its specialization depends on its position in the frame or the fabric of the city, which shapes the features of its organization. This paper investigates the spatial and functional features of UNAs, taking railway hubs, the connective nodes of intra-city and external communications, as the base objects of research. The research uses stratified sampling with a selection of typical objects: 14 railway hubs, domestic and foreign, in the largest cities with populations over 1 million, located in climate zones identical or close to those of Russia. Features of the organization are identified through a combined study of functional and spatial characteristics, based on the hypothesis that the organization of urban nodes has a dual character. The analysis uses an approximation method that enables general conclusions to be drawn from a representative sample of the whole population of railway hubs and their development areas. The results show a specific ratio of functional to spatial organization in UNAs based on railway hubs, on the basis of which a typology of spaces and urban nodal areas is proposed. Identifying the spatial diversity and functional-organizational features of the greatest railway hubs and their development areas gives an indication of the different evolutionary stages of formation approaches, and helps to identify new patterns for complex and effective design as a prediction of the direction of development of domestic hubs.

Keywords: urban nodal area, railway hubs, features of structural, functional organization

Procedia PDF Downloads 368
219 Mapping Social and Natural Hazards: A Survey of Potential for Managed Retreat in the United States

Authors: Karim Ahmed

Abstract:

The purpose of this study was to investigate how factoring in the impact of natural disasters beyond flooding would affect managed retreat policy eligibility in the United States. The study design used a correlation analysis that compared weighted measures of flooding and other natural disasters (e.g., wildfires, tornadoes, heatwaves) with CBSA-populated areas, the prevalence of cropland, and relative poverty at the county level. The study found that the vast majority of CBSAs eligible for managed retreat programs under a policy inclusive of non-flooding events would already have been covered by flood-only managed retreat policies. However, a majority of the counties not covered by a flood-only policy have high rates of poverty and are either heavily populated and/or agriculturally active. The correlation is particularly strong between counties subject to multiple natural hazards and those with both high relative poverty and high cropland prevalence. There is currently no managed retreat policy for agricultural land in the United States despite the environmental implications and food-supply-chain vulnerabilities of at-risk cropland. The findings suggest both that such a policy should be created and that, when it is, special attention should be paid to non-flood natural disasters affecting agricultural areas. They also reveal that, while current flood-based policies serve many areas that do need access to managed retreat funding and implementation, other vulnerable areas are overlooked by this approach. These areas are often deeply impoverished and therefore particularly vulnerable to natural disasters; when disasters do occur, they are less financially prepared to recover or retreat and, given the limitations of the current policies discussed above, less able to take the precautionary measures necessary to mitigate their risk.

Keywords: flood, hazard, land use, managed retreat, wildfire

Procedia PDF Downloads 103
218 The Effect of Adhesion on the Frictional Hysteresis Loops at a Rough Interface

Authors: M. Bazrafshan, M. B. de Rooij, D. J. Schipper

Abstract:

Frictional hysteresis is the phenomenon in which mechanical contacts are subjected to small (compared to the contact area) oscillating tangential displacements. In the presence of adhesion at the interface, the contact repulsive force increases, leading to a higher static friction force and a longer pre-sliding displacement. This paper proposes a boundary element model (BEM) for the adhesive frictional hysteresis contact at the interface of two contacting bodies of arbitrary geometry. In this model, adhesion is represented by a Dugdale approximation of the total work of adhesion at local areas with a very small gap between the two bodies. The frictional contact is divided into sticking and slipping regions in order to capture the transition from stick to slip (the pre-sliding regime), in which the stick and slip regions are defined by the local values of shear stress and normal pressure. In the studied cases, a fixed normal force is applied to the interface and the friction force is varied so as to initiate gross sliding reciprocally in each direction. In the first case, the problem is solved at the smooth interface between a ball and a flat for different values of the work of adhesion. It is shown that as the work of adhesion increases, both the static friction and the pre-sliding distance increase due to the increase in the contact repulsive force. In the second case, the rough interface between a glass ball and, respectively, a silicon wafer and a DLC (Diamond-Like Carbon) coating is considered, with the work of adhesion assumed identical for both interfaces. As adhesion depends on the interface roughness, the contact repulsive force differs between these interfaces: for the smoother interface, a larger contact repulsive force and, consequently, a larger static friction force and pre-sliding distance are observed.

Keywords: boundary element model, frictional hysteresis, adhesion, roughness, pre-sliding

Procedia PDF Downloads 149
217 Analysis of Thermal Effect on Functionally Graded Micro-Beam via Mixed Finite Element Method

Authors: Cagri Mollamahmutoglu, Ali Mercan, Aykut Levent

Abstract:

Studies concerning microstructures are becoming more important as the use of various micro-electro-mechanical systems (MEMS) increases. In recent years, the thermal buckling and vibration of microstructures have therefore been the subject of many investigations using different numerical methods. In this study, thermal effects on the mechanical response of a functionally graded (FG) Timoshenko micro-beam are presented in the framework of a mixed finite element formulation, with size effects taken into account via modified couple stress theory. The mixed formulation is based on a functional derived systematically via the Gateaux differential. After resolving all the field equations of the beam, a potential operator is carefully constructed and then used to manufacture the functional. The usual finite element approximation procedures are applied to derive the mixed finite element equations once the potential is obtained. The resulting formulation allows the use of simple linear C₀ shape functions and avoids the shear-locking phenomenon, a common shortcoming of displacement-based formulations of moderately thick beams. The developed numerical scheme is used to obtain the effects of thermal loads on the static bending, free vibration and buckling of FG Timoshenko micro-beams for different power-law parameters, aspect ratios and boundary conditions. The versatility of the mixed formulation is demonstrated against other numerical methods, such as the generalized differential quadrature method (GDQM). Another attractive property of the formulation is that it allows direct calculation of the contribution of micro effects to the overall mechanical response.

Keywords: micro-beam, functionally graded materials, thermal effect, mixed finite element method

Procedia PDF Downloads 108
216 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique

Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki

Abstract:

Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high liver accumulation using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). The system acquired list-mode raw data over 10 minutes for a typical patient; from these data, ten SPECT images were reconstructed, one for every minute of acquisition, using a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously, and we performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstruction from the early projections, during which the liver accumulation dominates (0.5-2.5 minute SPECT image minus 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5-10 minute SPECT image minus liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, myocardial uptake overlapped by the liver was not diagnosable; with our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
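The subtraction itself is simple image arithmetic: an early, liver-dominated frame is scaled and subtracted from the later frame, with negative counts clipped to zero. A toy 1-D sketch (the values are illustrative, not patient data):

```python
import numpy as np

def time_subtraction(early, late, scale=1.0):
    """Remove dominant liver activity: subtract the early (liver-only)
    frame from the later frame showing both liver and myocardial uptake."""
    corrected = late - scale * early
    return np.clip(corrected, 0, None)   # counts cannot be negative

# Toy 1-D "profiles": liver peak on the left, myocardium on the right
early = np.array([80.0, 60.0, 5.0, 2.0])    # early frame (liver dominates)
late  = np.array([85.0, 65.0, 40.0, 35.0])  # late frame (liver + myocardium)
myocardium = time_subtraction(early, late)
```

After subtraction the residual liver signal is small while the myocardial signal survives, which is the mechanism behind the improved visualization of the inferior wall.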

Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector

Procedia PDF Downloads 310
215 Data Mining Spatial: Unsupervised Classification of Geographic Data

Authors: Chahrazed Zouaoui

Abstract:

In recent years, the volume of geospatial information has been increasing due to the evolution of communication and information technologies. This information is often handled by geographic information systems (GIS) and stored in spatial databases. Classical data mining has revealed a weakness in knowledge extraction from such enormous amounts of data, owing to a particularity of spatial entities: their interdependence (the first law of geography). This gave rise to spatial data mining, the process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data. Among the methods of this process, we distinguish monothematic and thematic approaches. Geo-clustering is one of the main tasks of spatial data mining and belongs to the monothematic family: it groups similar geo-spatial entities into the same class and assigns dissimilar ones to different classes. In other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geo-spatial data. Two approaches to geo-clustering exist: dynamic processing, which applies algorithms designed for the direct treatment of spatial data, and the approach based on pre-processing, which applies classic clustering algorithms to pre-processed data (obtained by integrating the spatial relationships).
The pre-processing approach is quite complex in several cases, so the search for approximate solutions involves approximation algorithms, among which we are interested in dedicated approaches (partitioning and density-based clustering methods) and bee-inspired algorithms (a biomimetic approach). Our study proposes a design for this problem that uses different algorithms to automatically detect geo-spatial neighborhoods, in order to implement geo-clustering by pre-processing, and applies the bees algorithm to this problem for the first time in the geo-spatial field.
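The pre-processing approach described above (inject spatial relationships into the features, then run a classic clusterer unchanged) can be sketched as follows. The data, the spatial weight `w`, and the use of a plain k-means are illustrative assumptions, not the study's bee-inspired algorithm.

```python
# Sketch: blend coordinates into each entity's attribute vector, then
# apply an unmodified (tiny, deterministic) k-means.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    centers = [list(p) for p in points[:k]]     # naive initialisation
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centers[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = [sum(c) / len(members) for c in zip(*members)]
    return labels

# Entities: (x, y, attribute). The weight w injects geography into the
# feature space; w = 0.5 is an arbitrary illustrative choice.
entities = [(0, 0, 5.0), (1, 0, 5.2), (0, 1, 4.9),
            (10, 10, 1.0), (11, 10, 1.1), (10, 11, 0.9)]
w = 0.5
vectors = [[w * x, w * y, attr] for x, y, attr in entities]
labels = kmeans(vectors, k=2)
```

The two spatially separated groups end up in different clusters; a real pipeline would derive the neighborhood relation (e.g. Delaunay or k-nearest neighbors) instead of a fixed coordinate weight.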

Keywords: mining, GIS, geo-clustering, neighborhood

Procedia PDF Downloads 359
214 Land Suitability Assessment for Vineyards in Afghanistan Based on Physical and Socio-Economic Criteria

Authors: Sara Tokhi Arab, Tariq Salari, Ryozo Noguchi, Tofael Ahamed

Abstract:

Land suitability analysis is essential for table grape cultivation in order to increase its production and productivity under the dry conditions of Afghanistan. In this context, the main aim of this paper was to determine suitable locations for vineyards based on satellite remote sensing and GIS (geographical information system) in Kabul Province of Afghanistan. The Landsat 8 OLI (operational land imager) and thermal infrared sensor (TIRS) images and the shuttle radar topography mission digital elevation model (SRTM DEM) were processed to obtain the normalized difference vegetation index (NDVI), normalized difference moisture index (NDMI), land surface temperature (LST), and topographic criteria (elevation, aspect, and slope). Moreover, JAXA rainfall data (mm per hour) and soil property information were also used for the physical suitability of vineyards. Besides, socio-economic criteria were collected through field surveys in Kabul Province in order to develop the socio-economic suitability map. Finally, the suitability classes were determined using weighted overlay, based on a reclassification of each criterion with AHP (Analytical Hierarchy Process) weights. The results indicated that only 11.1% of areas were highly suitable, 24.8% were moderately suitable, 35.7% were marginally suitable and 28.4% were not physically suitable for grape production. However, 15.7% were highly suitable, 17.6% were moderately suitable, 28.4% were marginally suitable and 38.3% were not socio-economically suitable for table grape production in Kabul Province. This research could help decision-makers, growers, and other stakeholders conduct precise land assessments by identifying the main limiting factors for table grape production, and could thereby increase land productivity.
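The AHP weighting step mentioned above can be sketched as follows. The 3x3 pairwise comparison matrix is an invented example, not the study's actual judgments, and the weights use the common geometric-mean approximation of the principal eigenvector.

```python
import math

def ahp_weights(matrix):
    """Geometric-mean approximation of AHP priority weights."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(matrix, weights):
    """CR = CI / RI with Saaty's random index (n = 3 -> RI = 0.58)."""
    n = len(matrix)
    # lambda_max estimated from (A w) / w averaged over rows
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
    return ci / ri if ri else 0.0

# Hypothetical judgments for three criteria (e.g. NDVI, slope, rainfall).
pairwise = [[1.0, 3.0, 5.0],
            [1/3, 1.0, 3.0],
            [1/5, 1/3, 1.0]]
w = ahp_weights(pairwise)
cr = consistency_ratio(pairwise, w)   # CR < 0.1 => judgments acceptable
```

A CR below the conventional 0.1 threshold indicates the pairwise judgments are consistent enough to use the derived weights in the overlay.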

Keywords: vineyards, land physical suitability, socio-economic suitability, AHP

Procedia PDF Downloads 149
213 The Effectiveness of Incidental Physical Activity Interventions Compared to Other Interventions in the Management of People with Low Back Pain: A Systematic Review and Meta-Analysis

Authors: Hosam Alzahrani, Martin Mackey, Emmanuel Stamatakis, Marina B. Pinheiro, Manuela Wicks, Debra Shirley

Abstract:

Objective: To investigate the effectiveness of incidental (non-structured) physical activity interventions compared with other commonly prescribed interventions for the management of people with low back pain (LBP). Methods: We performed a systematic review with meta-analyses of eligible randomized controlled trials obtained by searching Medline, Scopus, CINAHL, EMBASE, and CENTRAL. This review considered trials investigating the effect of incidental physical activity interventions compared to other interventions in people aged 18 years or over diagnosed with non-specific LBP. Analyses were conducted separately for short-term (≤ 3 months), intermediate-term (> 3 and < 12 months), and long-term (≥ 12 months) follow-ups, for each outcome. The analyses were conducted using the weighted mean difference (WMD). The overall quality of evidence was assessed using the GRADE system. Meta-analyses were only performed for pain and disability outcomes, as there was insufficient data on the other outcomes. Results: For pain, the pooled results did not show any significant effects between the incidental physical activity intervention and other interventions at any time point. For disability, incidental physical activity was not statistically more effective than other interventions in the short term; however, the pooled results favored incidental physical activity at intermediate-term (WMD = -6.05, 95% CI: -10.39 to -1.71, p = 0.006) and long-term (WMD = -6.40, 95% CI: -11.68 to -1.12, p = 0.02) follow-ups among participants with chronic LBP. The overall quality of evidence was rated "moderate quality" based on the GRADE system. Conclusion: The incidental physical activity intervention provided intermediate- and long-term disability improvement for people with chronic LBP, although this improvement was small and not likely to be clinically important.
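The weighted mean difference (WMD) pooling behind such a meta-analysis can be sketched with inverse-variance weights. The per-trial means, standard deviations, and sample sizes below are invented for illustration, not the review's data.

```python
import math

def weighted_mean_difference(trials):
    """Fixed-effect inverse-variance pooling.
    trials: (mean1, sd1, n1, mean2, sd2, n2) per study."""
    num = den = 0.0
    for m1, s1, n1, m2, s2, n2 in trials:
        md = m1 - m2                             # per-trial mean difference
        var = s1 ** 2 / n1 + s2 ** 2 / n2        # variance of the MD
        weight = 1.0 / var                       # inverse-variance weight
        num += weight * md
        den += weight
    pooled = num / den
    se = math.sqrt(1.0 / den)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical disability-score trials (negative = favors intervention).
trials = [(-8.0, 10.0, 40, -2.0, 10.0, 40),
          (-5.0, 12.0, 60, -1.0, 12.0, 60)]
wmd, ci = weighted_mean_difference(trials)
```

A random-effects model would additionally inflate each trial's variance by a between-study component; the fixed-effect version shown here is the simplest case.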

Keywords: physical activity, incidental, low back pain, systematic review, meta-analysis

Procedia PDF Downloads 134
212 A Critical Review and Bibliometric Analysis on Measures of Achievement Motivation

Authors: Kanupriya Rawat, Aleksandra Błachnio, Paweł Izdebski

Abstract:

Achievement motivation, which drives a person to strive for success, is an important construct in sports psychology. This systematic review aims to analyze the methods of measuring achievement motivation used in studies published over the past four decades and to find out which method is the most prevalent and the most effective, by thoroughly examining the measures used in each study and by evaluating the most highly cited achievement motivation measures in sport. In order to understand this latent construct, thorough measurement is necessary; hence, a critical evaluation of measurement tools is required. The literature search was conducted in the following databases: EBSCO, MEDLINE, APA PsycARTICLES, Academic Search Ultimate, Open Dissertations, ERIC, ScienceDirect, Web of Science, as well as the Wiley Online Library. A total of 26 articles met the inclusion criteria and were selected. From this review, it was found that the Achievement Goal Questionnaire-Sport (AGQ-Sport) and the Task and Ego Orientation in Sport Questionnaire (TEOSQ) were used in most of the research; the average weighted impact factor of the AGQ-Sport is the second highest and the most relevant in terms of research articles related to the sport psychology discipline. The TEOSQ is highly popular in cross-cultural adaptation but has the second-lowest average impact factor among the scales, because most of its publishing journals have lower impact factors. All measures of achievement motivation have Cronbach's alpha values above .70, which is acceptable. The advantages and limitations of each measurement tool are discussed, and the distinction between using implicit and explicit measures of achievement motivation is explained.
Overall, implicit and explicit measures rest on different conceptualizations of achievement motivation and are applicable at either the contextual or the situational level. The conceptualization and degree of applicability are perhaps the most crucial factors for researchers choosing a questionnaire, even though the instruments differ in their development, reliability, and use.

Keywords: achievement motivation, task and ego orientation, sports psychology, measures of achievement motivation

Procedia PDF Downloads 76
211 Determination of Temperature Dependent Characteristic Material Properties of Commercial Thermoelectric Modules

Authors: Ahmet Koyuncu, Abdullah Berkan Erdogmus, Orkun Dogu, Sinan Uygur

Abstract:

Thermoelectric modules are integrated with electronic components to keep their temperature at specific values in electronic cooling applications. They can be used at different ambient temperatures. The cold-side temperatures of thermoelectric modules depend on their hot-side temperatures, operating currents, and heat loads. Performance curves of thermoelectric modules are given for at most two different hot-surface temperatures in product catalogs. Characteristic properties are required to select appropriate thermoelectric modules in the thermal design phase of projects. Generally, manufacturers do not provide the characteristic material property values of thermoelectric modules to customers, for confidentiality. Common commercial software such as ANSYS ICEPAK, FloEFD, etc., includes thermoelectric modules in its libraries and can therefore easily be used to predict the effect of thermoelectric usage in thermal design. Some software requires only the performance values at different temperatures; others, like ICEPAK, require three temperature-dependent equations for material properties (Seebeck coefficient (α), electrical resistivity (β), and thermal conductivity (γ)). Since the number and variety of thermoelectric modules in this software are limited, definitions of the characteristic material properties of thermoelectric modules may be required. In this manuscript, a method for deriving characteristic material properties from the datasheet of thermoelectric modules is presented. Material characteristics were estimated from two different performance curves, both experimentally and numerically. Numerical calculations were accomplished in ICEPAK using a thermoelectric module that exists in the ICEPAK library, and a new experimental setup was established for the experimental study. Because the numerical and experimental studies gave similar results, the proposed equations can be considered validated.
This approximation can be suggested for analyses that include different types or brands of TEC modules.
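The abstract does not publish its derivation, but one widely used set of closed-form estimates recovers effective module parameters from datasheet maxima (Vmax, Imax, ΔTmax at hot-side temperature Th). The formulas and the datasheet point below are a hedged illustration, not necessarily the authors' equations:

```python
def tec_module_parameters(v_max, i_max, dt_max, t_hot):
    """Effective module-level Seebeck coefficient (V/K), electrical
    resistance (ohm) and thermal conductance (W/K) from datasheet
    maxima, using the classic closed-form estimates."""
    seebeck = v_max / t_hot
    resistance = v_max * (t_hot - dt_max) / (i_max * t_hot)
    conductance = v_max * i_max * (t_hot - dt_max) / (2 * t_hot * dt_max)
    return seebeck, resistance, conductance

# Illustrative datasheet point (not a specific product):
# Vmax = 15.4 V, Imax = 6 A, dTmax = 68 K at Th = 300 K.
s_m, r_m, k_m = tec_module_parameters(15.4, 6.0, 68.0, 300.0)
```

Repeating the calculation at a second catalog hot-side temperature gives a second parameter set, from which the temperature-dependent fits that ICEPAK expects can be built.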

Keywords: electrical resistivity, material characteristics, thermal conductivity, thermoelectric coolers, Seebeck coefficient

Procedia PDF Downloads 158
210 Efficacy of Celecoxib Adjunct Treatment on Bipolar Disorder: Systematic Review and Meta-Analysis

Authors: Daniela V. Bavaresco, Tamy Colonetti, Antonio Jose Grande, Francesc Colom, Joao Quevedo, Samira S. Valvassori, Maria Ines da Rosa

Abstract:

Objective: We performed a systematic review and meta-analysis to evaluate the potential effect of adjunct treatment with the cyclo-oxygenase (Cox)-2 inhibitor Celecoxib in Bipolar Disorder (BD), based on randomized controlled trials. Method: A search of the electronic databases was performed on MEDLINE, EMBASE, Scopus, the Cochrane Central Register of Controlled Trials (CENTRAL), BioMed Central, Web of Science, IBECS, LILACS, PsycINFO (American Psychological Association), congress abstracts, and grey literature (Google Scholar and the British Library) for studies published from January 1990 to February 2018. A search strategy was developed using the terms 'Bipolar disorder' or 'Bipolar mania' or 'Bipolar depression' or 'Bipolar mixed' or 'Bipolar euthymic' and 'Celecoxib' or 'Cyclooxygenase-2 inhibitors' or 'Cox-2 inhibitors' as text words and Medical Subject Headings (i.e., MeSH and EMTREE). The therapeutic effects of adjunctive treatment with Celecoxib were analyzed, and it was possible to carry out a meta-analysis of three studies included in the systematic review. The meta-analysis was performed on the final scores of the Young Mania Rating Scale (YMRS) at the end of the randomized controlled trials (RCTs). Results: Three primary studies were included in the systematic review, with a total of 121 patients. The meta-analysis showed a significant effect on the YMRS scores of patients with BD who received adjuvant Celecoxib in comparison to placebo: the weighted mean difference was 5.54 (95% CI = 3.26-7.82; p < 0.001; I² = 0%). Conclusion: The systematic review suggests that adjuvant treatment with Celecoxib improves the response to the main treatments in patients with BD when compared with adjuvant placebo.

Keywords: bipolar disorder, Cox-2 inhibitors, Celecoxib, systematic review, meta-analysis

Procedia PDF Downloads 468
209 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation

Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes

Abstract:

The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of many materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain, such that the flexibility of the structure is minimized, under certain boundary conditions and the intervention of external forces. In the case where we have only one material, each point of the discretized domain is represented by the value of a function that equals 1 if the element belongs to the structure and 0 if the element is empty. A common way to avoid the high computational cost of solving integer-variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous interpolation function, a power function whose base variable represents a pseudo-density at each point of the domain. For proper exponent values, the SIMP method penalizes intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one that we explore here is the ordered SIMP method, which has the advantage of not relying on additional variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of calculating them. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.
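The SIMP power-law interpolation described above fits in one line; the values of E_min, E_0, and the conventional penalty exponent p = 3 are illustrative choices, not the paper's settings.

```python
def simp_stiffness(x, e_min=1e-9, e_0=1.0, p=3.0):
    """Pseudo-density x in [0, 1] -> interpolated Young's modulus.
    The penalty exponent p > 1 makes intermediate densities
    structurally uneconomical, pushing the optimum toward 0/1."""
    return e_min + x ** p * (e_0 - e_min)

# Half density buys disproportionately little stiffness:
half = simp_stiffness(0.5)    # about 0.125 of full stiffness, not 0.5
full = simp_stiffness(1.0)
```

The ordered SIMP variant applies the same power-law idea piecewise between the normalized densities of consecutive materials, which is what keeps the variable count independent of the number of materials.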

Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization

Procedia PDF Downloads 286
208 Research on the Spatio-Temporal Evolution Pattern of Traffic Dominance in Shaanxi Province

Authors: Leng Jian-Wei, Wang Lai-Jun, Li Ye

Abstract:

In order to measure and analyze the transportation situation within the counties of Shaanxi Province over a certain period of time and to promote the province's future transportation planning and development, this paper proposes a reasonable layout plan and compares model rationality. The study uses the entropy weight method to measure the transportation advantages of 107 counties in Shaanxi Province in 2013 and 2021 along three dimensions: road network density, trunk line influence, and location advantage. It applies spatial autocorrelation analysis to examine the spatial layout and development trend of county-level transportation, and conducts ordinary least squares (OLS) regression on transportation impact factors and other influencing factors. The paper also compares the regression fit of the Geographically Weighted Regression (GWR) model with that of the OLS model. The results show that, spatially, the transportation advantages of Shaanxi Province generally decrease from the Weihe Plain toward the surrounding areas and mainly exhibit high-high clustering. Temporally, transportation advantages show an overall upward trend, and spatial imbalance gradually decreases. People's travel demands have changed to some extent, with an overall increase in demand for rapid transportation. The GWR model fits transportation advantages with a goodness of fit of 0.74, higher than the OLS model's 0.64. Based on the evolution of transportation advantages, this trend is predicted to continue for some time. Expanding the layout of rapid transportation can effectively enhance the transportation advantages of Shaanxi Province. When analyzing spatial heterogeneity, geographic factors should be considered to establish a more reliable model.
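The entropy weight method used above can be sketched as follows: indicators are min-max normalized, Shannon entropy is computed per indicator, and lower-entropy (more discriminating) indicators receive larger weights. The county matrix is invented for illustration.

```python
import math

def entropy_weights(matrix):
    """Rows = observation units (counties), columns = indicators."""
    n = len(matrix)
    cols = list(zip(*matrix))
    # Min-max normalisation per indicator.
    norm = []
    for col in cols:
        lo, hi = min(col), max(col)
        norm.append([(v - lo) / (hi - lo) for v in col])
    # Entropy per indicator (zero shares contribute nothing, by convention).
    raw = []
    for col in norm:
        s = sum(col)
        p = [v / s for v in col if v > 0]
        e = -sum(q * math.log(q) for q in p) / math.log(n)
        raw.append(1 - e)                    # divergence = 1 - entropy
    total = sum(raw)
    return [w / total for w in raw]

# Hypothetical rows = counties; columns = road network density,
# trunk line influence, location advantage.
data = [[0.2, 10.0, 3.0], [0.5, 30.0, 4.0],
        [0.9, 35.0, 5.0], [0.4, 20.0, 2.0]]
w = entropy_weights(data)
```

The composite transportation-dominance score of each county is then the weighted sum of its normalized indicators.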

Keywords: traffic dominance, GWR model, spatial autocorrelation analysis, temporal and spatial evolution

Procedia PDF Downloads 65
207 Landslide and Liquefaction Vulnerability Analysis Using Risk Assessment Analysis and Analytic Hierarchy Process Implication: Suitability of the New Capital of the Republic of Indonesia on Borneo Island

Authors: Rifaldy, Misbahudin, Khalid Rizky, Ricky Aryanto, M. Alfiyan Bagus, Fahri Septianto, Firman Najib Wibisana, Excobar Arman

Abstract:

Indonesia is a country with a high level of disaster risk because it lies on the ring of fire, and several of its regions sit at the meeting of three major tectonic plates. Disaster analysis must therefore be carried out continually to assess potential hazards, in this research specifically landslides and liquefaction. This study analyzes areas vulnerable to landslide and liquefaction hazards and their relation to the assessment of moving the new capital of the Republic of Indonesia to the island of Kalimantan, with a total area of 612,267.22 km². The method uses the Analytic Hierarchy Process, with consistency ratio testing, to decompose a complex and unstructured problem into several parameters by assigning values. The parameters used in this analysis are slope, land cover, lithology distribution, wetness index, earthquake data, and peak ground acceleration. A weighted overlay of all these parameters was carried out using the percentage values obtained from the Analytic Hierarchy Process, with accuracy confirmed by the consistency ratio, yielding percentages of area with different vulnerability classifications. Based on the analysis, vulnerability classifications from high to very low were obtained: 918.40083 km² (0.15%) highly vulnerable, 127,045.44815 km² (20.75%) medium, 346,175.886188 km² (56.54%) low, and 138,127.484832 km² (22.56%) very low. This research is expected to map landslide and liquefaction hazards on the island of Kalimantan and to inform consideration of the suitability of developing the new capital of the Republic of Indonesia. It is also expected to provide input for, or be applied in, any region analyzing landslide and liquefaction vulnerability or the suitability of regional development.
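The weighted-overlay step can be sketched as follows: each criterion raster is reclassified to a common 1-4 scale, combined cell by cell with its AHP weight, and the result binned into vulnerability classes. The tiny rasters, weights, and class breaks are illustrative, not the study's data.

```python
def weighted_overlay(rasters, weights):
    """Cell-wise weighted sum of equally sized reclassified rasters."""
    rows, cols = len(rasters[0]), len(rasters[0][0])
    return [[sum(w * r[i][j] for r, w in zip(rasters, weights))
             for j in range(cols)] for i in range(rows)]

def classify(score):
    """Illustrative class breaks on the 1-4 composite scale."""
    if score >= 3.5: return "very high"
    if score >= 2.5: return "high"
    if score >= 1.5: return "medium"
    return "low"

# Hypothetical 2x2 rasters already reclassified to 1-4.
slope   = [[4, 1], [2, 1]]
landuse = [[3, 2], [2, 1]]
pga     = [[4, 1], [3, 1]]
weights = [0.5, 0.2, 0.3]        # AHP weights, summing to 1

score = weighted_overlay([slope, landuse, pga], weights)
vuln = [[classify(s) for s in row] for row in score]
```

Multiplying the cell count of each class by the cell area then gives the per-class areas and percentages reported in the abstract.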

Keywords: analytic hierarchy process, Borneo Island, landslide and liquefaction, vulnerability analysis

Procedia PDF Downloads 141
206 Utility, Satisfaction and Necessity of Urban Parks: An Empirical Study of Two Suburban Parks of Kolkata Metropolitan Area, India

Authors: Jaydip De

Abstract:

Urban parks are open places, green fields and riverside gardens, usually maintained by public or private authorities, or eventually by both jointly, and utilized for multidimensional purposes by the citizens. These parks are indeed the lungs of urban centers. In the urban socio-environmental setup, parks are the nucleus of social integration, community building, and physical development. In contemporary cities, these green places seem to act as the panacea of congested, complex and stressful urban life. The alarmingly increasing urban population and the resultant congestion of high-rises are making life wearisome in neo-liberal cities, leaving citizens in constant quest of open space and fresh air. In such circumstances, the mere existence of parks is not capable of satisfying the growing aspirations. Therefore, in this endeavour, a structured attempt is made to empirically identify the utility, visitors' satisfaction, and future needs through the cases of two urban parks of the Kolkata Metropolitan Area, India. The study is principally based upon primary information collected through a visitors' perception survey conducted at the Chinsurah ground and the Chandernagore strand. The correlations between different utility categories are identified and analyzed systematically. At the same time, indices like the Weighted Satisfaction Score (WSS), Facility-wise Satisfaction Index (FSI), Urban Park Satisfaction Index (UPSI) and Urban Park Necessity Index (UPNI) are advocated to quantify visitors' satisfaction and future necessities. It is found that the most important utilities are passive in nature. Simultaneously, visitors' satisfaction levels are average, and their requirements centre on the daily needs of the next generation, i.e., the children. Further, considering the visitors' opinions, planning measures are proposed for the holistic development of urban parks to revitalize the sustainability of citified life.
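The abstract names its indices without giving formulas, so the following is only one plausible construction of a Weighted Satisfaction Score: Likert ratings (1-5) weighted by how many visitors chose each level, averaged per facility. Both the data and the formula are assumptions for illustration.

```python
def weighted_satisfaction_score(counts):
    """counts[k] = number of visitors giving rating k+1 (k = 0..4);
    returns the response-weighted mean rating."""
    total = sum(counts)
    return sum((k + 1) * c for k, c in enumerate(counts)) / total

# Hypothetical facility -> response counts for ratings 1..5.
facilities = {"greenery": [2, 5, 10, 20, 13],
              "benches":  [10, 15, 15, 7, 3]}
wss = {f: weighted_satisfaction_score(c) for f, c in facilities.items()}
```

Aggregating such facility-level scores (e.g. averaging them, or normalizing against the maximum rating) would give park-level indices in the spirit of the FSI and UPSI.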

Keywords: citified life, future needs, visitors’ satisfaction, urban parks, utility

Procedia PDF Downloads 147
205 Climate Change Effects in a Mediterranean Island and Streamflow Changes for a Small Basin Using Euro-Cordex Regional Climate Simulations Combined with the SWAT Model

Authors: Pier Andrea Marras, Daniela Lima, Pedro Matos Soares, Rita Maria Cardoso, Daniela Medas, Elisabetta Dore, Giovanni De Giudici

Abstract:

Climate change effects on the hydrologic cycle are the main concern in the evaluation of water management strategies. Climate models project scenarios of precipitation changes in the future, considering greenhouse gas emissions. In this study, the EURO-CORDEX (European Coordinated Regional Downscaling Experiment) climate models were first evaluated over a Mediterranean island (Sardinia) against observed precipitation for a historical reference period (1976-2005). A weighted multi-model ensemble (ENS) was built, weighting the single models by their ability to reproduce observed rainfall. Future projections (2071-2100) were carried out under the RCP 8.5 emissions scenario to evaluate changes in precipitation. The ENS was then used as climate forcing for the SWAT model (Soil and Water Assessment Tool), with the aim of assessing the consequences of the projected changes on the streamflow and runoff of two small catchments located in South-West Sardinia. Results showed that a decrease in mean rainfall, up to -25% at the yearly scale, is expected in the future, along with an increase in extreme precipitation events. Particularly in the eastern and southern areas, extreme events are projected to increase by 30%. Such changes are reflected in the hydrologic cycle through a decrease in mean streamflow and runoff, except in spring, when runoff is projected to increase by 20-30%. These results stress that the Mediterranean is a hotspot for climate change, and that model tools can provide very useful information for adopting water and land management strategies to deal with such changes.
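A skill-weighted ensemble of the kind described above can be sketched by weighting each model by the inverse of its RMSE against observations over the reference period. The rainfall series below are invented; the paper's actual weighting metric may differ.

```python
import math

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs))
                     / len(obs))

def skill_weighted_ensemble(models, obs):
    """Weights proportional to 1/RMSE, then a weighted mean series."""
    inv = [1.0 / rmse(m, obs) for m in models]
    total = sum(inv)
    weights = [v / total for v in inv]
    ens = [sum(w * m[t] for w, m in zip(weights, models))
           for t in range(len(obs))]
    return weights, ens

obs = [60.0, 80.0, 55.0, 70.0]               # observed monthly rainfall
models = [[58.0, 78.0, 57.0, 69.0],          # close to observations
          [40.0, 60.0, 40.0, 50.0]]          # strong dry bias
weights, ens = skill_weighted_ensemble(models, obs)
```

The skilful model dominates the ensemble, so the weighted series tracks observations much more closely than the biased model does.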

Keywords: EURO-CORDEX, climate change, hydrology, SWAT model, Sardinia, multi-model ensemble

Procedia PDF Downloads 193
204 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

A DNA barcode, a short mitochondrial DNA fragment, is made up of three subunits: a phosphate group, a sugar, and nucleic bases (A, T, C, and G). Barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes, a task that has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be simultaneously compared using Multiple Sequence Alignment, which is known to be NP-complete; to make this type of analysis feasible, heuristics like progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. Our method avoids the complex problem of form and structure in different classes of organisms; it is evaluated on empirical data, and its classification performance is compared with other methods. Our system consists of three phases. The first, called transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of DNA barcodes, Fourier transform, and power spectrum signal processing. The second, called approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, which is realized by applying a hierarchical classification algorithm.
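The "transformation" phase (EIIP coding, Fourier transform, power spectrum) can be sketched as follows. The EIIP nucleotide values are the standard published ones; the toy sequence is illustrative, and a real pipeline would use a fast FFT rather than this direct DFT.

```python
import cmath

# Standard EIIP values for the four nucleotides.
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(seq):
    """EIIP-encode a sequence, take its DFT, return |X_k|^2."""
    x = [EIIP[b] for b in seq]
    n = len(x)
    spectrum = []
    for k in range(n):
        xk = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                 for t in range(n))
        spectrum.append(abs(xk) ** 2)
    return spectrum

# A period-4 repeat concentrates energy at multiples of n/4.
ps = power_spectrum("ATGCATGCATGC")
```

The resulting spectral features (after discarding the dominant DC term) are what the approximation phase would feed to the wavelet networks.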

Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)

Procedia PDF Downloads 294
203 First Approximation to Congenital Anomalies in Kemp's Ridley Sea Turtle (Lepidochelys kempii) in Veracruz, Mexico

Authors: Judith Correa-Gomez, Cristina Garcia-De la Pena, Veronica Avila-Rodriguez, David R. Aguillon-Gutierrez

Abstract:

Kemp's ridley (Lepidochelys kempii) is the smallest species of sea turtle. It nests on the beaches of the Gulf of Mexico during summer. To date, there is no information about congenital anomalies in this species, which could be an important factor to consider as a survival threat. The aim of this study was to determine congenital anomalies in dead embryos and hatchlings of Kemp's ridley sea turtles during the 2020 nesting season. Fieldwork was conducted at the 'Campamento Tortugero Barra Norte', on the shores of Tuxpan, Veracruz, Mexico. A total of 95 nests were evaluated, from which 223 dead embryos and hatchlings were collected. Anomalies were detected by detailed physical examination, and photographs of each anomaly were taken. Of the 223 dead turtles, 213 (95%) showed a congenital anomaly. A total of 53 types of congenital anomalies were found: 22 types in the head region, 21 in the carapace region, 6 in the flipper region, and 4 involving the entire body. The most prevalent anomaly in the head region was the presence of supernumerary prefrontal scales (42%, 93 occurrences). In the carapace region, the most common anomaly was the presence of supernumerary gular scales (59%, 131 occurrences). The two most common anomalies in the flipper region were amelia of the fore flippers and bifurcation of the rear flippers (0.9%, 2 occurrences each). The most common anomaly involving the entire body was hypomelanism (35%, 79 occurrences). These results agree with recent studies on congenital malformations in sea turtles, with the head and carapace regions showing the highest numbers of congenital anomalies. It is unknown whether the reported anomalies can be related to the death of these individuals; however, it is necessary to develop embryological studies in this species. To the best of our knowledge, this is the first worldwide report on Kemp's ridley sea turtle anomalies.

Keywords: amelia, hypomelanism, morphology, supernumerary scales

Procedia PDF Downloads 136
202 Normalized P-Laplacian: From Stochastic Game to Image Processing

Authors: Abderrahim Elmoataz

Abstract:

More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs, where each vertex represents measured data and each edge represents a relationship (connectivity, or certain affinities or interactions) between two vertices. Processing and analyzing these types of data is a major challenge for both the image and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools that were initially developed on usual Euclidean spaces and have proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years there has been increasing interest in the investigation of one of the major mathematical tools for signal and image analysis: Partial Differential Equations (PDEs) and variational methods on graphs. The normalized p-Laplacian operator was recently introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest of this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator and the traditional Laplacian, which have been extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced as a discrete approximation of both the infinity-Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
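The tug-of-war-with-noise interpretation corresponds to a p-harmonious value iteration: at each interior vertex, the new value is alpha/2 times the sum of the best and worst neighbour values plus beta = 1 - alpha times the neighbour average. A toy sketch on a path graph follows; the graph, boundary values, and iteration count are illustrative assumptions.

```python
def p_harmonious(neighbors, boundary, alpha, iters=200):
    """Jacobi-style iteration of the p-harmonious mean-value property.
    neighbors: vertex -> list of adjacent vertices;
    boundary: vertex -> fixed Dirichlet value."""
    beta = 1.0 - alpha
    u = {v: boundary.get(v, 0.0) for v in neighbors}
    for _ in range(iters):
        new = dict(u)
        for v, nbrs in neighbors.items():
            if v in boundary:
                continue                      # Dirichlet values stay fixed
            vals = [u[w] for w in nbrs]
            new[v] = (alpha / 2.0) * (max(vals) + min(vals)) \
                     + beta * sum(vals) / len(vals)
        u = new
    return u

# Path 0-1-2-3-4 with u(0) = 0 and u(4) = 1.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
u = p_harmonious(neighbors, {0: 0.0, 4: 1.0}, alpha=0.5)
```

On degree-2 vertices the scheme collapses to the plain average, so the path example converges to the linear interpolant; on higher-degree graphs the choice of alpha (tied to p) controls the nonlinearity between the infinity-Laplacian (alpha = 1) and the graph Laplacian (alpha = 0).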

Keywords: normalized p-Laplacian, image processing, stochastic game, inverse problems

Procedia PDF Downloads 490
201 Quantification and Evaluation of Tumors Heterogeneity Utilizing Multimodality Imaging

Authors: Ramin Ghasemi Shayan, Morteza Janebifam

Abstract:

Tumors are frequently inhomogeneous: regional variations in necrosis, metabolic activity, proliferation, and vascularity are observed. There is growing evidence that solid tumors may contain subpopulations of cells with different genotypes and phenotypes. These distinct populations of cancer cells can interact in complex ways and may differ in sensitivity to drugs. Most tumors show biological heterogeneity, including heterogeneity in genomic subtypes, variations in the expression of growth factors and pro- and anti-angiogenic factors, and variations in the tumoural microenvironment. These can present as differences between the tumors of different individuals. For instance, O6-methylguanine-DNA methyltransferase, a DNA repair enzyme, is silenced by methylation of the gene promoter in half of glioblastomas (GBM), contributing to chemosensitivity and improved survival. There has been particular interest in the use of diffusion-weighted imaging (DWI) and dynamic contrast-enhanced MRI (DCE-MRI). DWI sensitizes MRI to water diffusion within the extravascular extracellular space (EES) and is therefore influenced by the size and configuration of the cell population. DCE-MRI uses dynamic acquisition of images during and after the injection of an intravenous contrast agent; signal changes are then converted to absolute contrast concentrations, permitting analysis with pharmacokinetic models. The PET modality provides unique biological specificity, allowing dynamic or static imaging of biological molecules labeled with positron-emitting isotopes (for example, 15O, 18F, 11C). The technique involves a substantial radiation dose, which limits repeated measurements, particularly when PET is used together with computed tomography (CT).
Finally, it is of great interest to measure the regional hemoglobin state, which could be combined with DCE-CT vascular physiology measurements to generate significant insights for understanding tumor hypoxia.

Keywords: heterogeneity, computed tomography, magnetic resonance imaging, PET

Procedia PDF Downloads 123