Search results for: Euclidean
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 66

36 A Multi-Population DE with Adaptive Mutation and Local Search for Global Optimization

Authors: Zhoucheng Bao, Haiyan Zhu, Tingting Pang, Zuling Wang

Abstract:

This paper proposes a multi-population DE with adaptive mutation and local search for global optimization, named AMMADE. In AMMADE, the population is divided at each generation using a Euclidean distance sorting method in order to coordinate the cooperation between subpopulations and the allocation of computing resources, such that the best-performing subpopulation receives more computing resources in the next generation. Further, an adaptive local search strategy is applied to the best-performing subpopulation to achieve a balanced search. The proposed algorithm has been tested on optimization problems taken from the CEC2014 benchmark suite. Experimental results show that our algorithm achieves results competitive with or better than related methods, and they confirm the significance of the devised strategies in the proposed algorithm.
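A minimal sketch of the distance-sorting step described above, assuming minimization and hypothetical array shapes; the actual AMMADE partitioning rule and resource-allocation details are not specified in the abstract:

```python
import numpy as np

def partition_by_euclidean_distance(population, fitness, n_subpops):
    """Sort individuals by Euclidean distance to the current best and
    split them into contiguous subpopulations (illustrative only)."""
    best = population[np.argmin(fitness)]            # minimization assumed
    dists = np.linalg.norm(population - best, axis=1)
    order = np.argsort(dists)                        # nearest to farthest
    return np.array_split(order, n_subpops)          # groups of indices

# Toy usage: 30 individuals in 5 dimensions, 3 subpopulations.
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(30, 5))
fit = np.sum(pop**2, axis=1)                         # sphere test function
subpops = partition_by_euclidean_distance(pop, fit, 3)
```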

Keywords: differential evolution, multi-mutation strategies, memetic algorithm, adaptive local search

Procedia PDF Downloads 115
35 Philippine Site Suitability Analysis for Biomass, Hydro, Solar, and Wind Renewable Energy Development Using Geographic Information System Tools

Authors: Jara Kaye S. Villanueva, M. Rosario Concepcion O. Ang

Abstract:

For the past few years, the Philippines has depended on oil, coal, and fossil fuels for most of its energy. According to the Department of Energy (DOE), the dominance of coal in the energy mix will continue until the year 2020. The expanding energy needs of the country have led to increasing efforts to promote and develop renewable energy. This research is part of the government initiative in preparation for renewable energy development and expansion in the country. The Philippine Renewable Energy Resource Mapping from Light Detection and Ranging (LiDAR) Surveys is a three-year government project which aims to assess and quantify the renewable energy potential of the country and to put it into usable maps. This study focuses on the site suitability analysis of four renewable energy sources – biomass (coconut, corn, rice, and sugarcane), hydro, solar, and wind energy. Site assessment is a key component in determining the most suitable locations for the construction of renewable energy power plants. The method maximizes the use of technical resource assessment while taking into account environmental, social, and accessibility aspects by utilizing and integrating two different approaches: the Multi-Criteria Decision Analysis (MCDA) method and Geographic Information System (GIS) tools. For the MCDA, Analytical Hierarchy Processing (AHP) is employed to determine the parameters needed for the suitability analysis. To structure these site suitability parameters, experts from different fields were consulted – scientists, policymakers, environmentalists, and industrialists; a well-represented group of consultees is needed to avoid bias in the resulting hierarchy levels and weight matrices. AHP pairwise matrix computation is used to derive the weights at each level from the experts' feedback, while threshold values derived from related literature, international studies, and government laws were reviewed with energy specialists from the DOE. Geospatial analysis using GIS tools translates these decision-support outputs into visual maps. In particular, this study uses Euclidean distance to compute the distance values of each parameter, the Fuzzy Membership algorithm to normalize the Euclidean distance outputs, and the Weighted Overlay tool to aggregate the layers. Using the Natural Breaks algorithm, the suitability ratings of each map are classified into 5 discrete categories of suitability index: (1) not suitable, (2) least suitable, (3) suitable, (4) moderately suitable, and (5) highly suitable. This classification groups similar values together, placing class boundaries where the differences between groups are largest. Results show that over the entire Philippine area of responsibility, biomass has the highest suitability rating, with rice as the most suitable at 75.76% suitability, whereas wind has the lowest suitability at 10.28%. Solar and hydro fall between the two, with suitability values of 28.77% and 21.27%, respectively.
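A minimal sketch of the AHP weighting and weighted-overlay steps named above, assuming a hypothetical 3-criterion pairwise comparison matrix and random stand-in raster layers; the real criteria, judgments, and layers come from the expert consultations and GIS data:

```python
import numpy as np

# Hypothetical 3-criterion pairwise comparison matrix (Saaty 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP weights: principal eigenvector of the pairwise matrix, normalized.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.real(eigvals))
w = np.real(eigvecs[:, k])
w = w / w.sum()

# Consistency ratio check (random index RI = 0.58 for n = 3).
lam_max = np.max(np.real(eigvals))
CI = (lam_max - len(A)) / (len(A) - 1)
CR = CI / 0.58
print(w, CR)   # weights sum to 1; CR < 0.1 is conventionally acceptable

# Weighted overlay: aggregate normalized (fuzzy-membership) layers.
layers = np.random.rand(3, 100, 100)    # stand-ins for suitability layers
suitability = np.tensordot(w, layers, axes=1)   # weighted sum per cell
```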

Keywords: site suitability, biomass energy, hydro energy, solar energy, wind energy, GIS

Procedia PDF Downloads 122
34 Numerical Applications of Tikhonov Regularization for the Fourier Multiplier Operators

Authors: Fethi Soltani, Adel Almarashi, Idir Mechai

Abstract:

Tikhonov regularization and reproducing kernels are among the most popular approaches to solving ill-posed problems in computational mathematics and applications. Fourier multiplier operators are an essential tool for extending known linear transforms in Euclidean Fourier analysis, such as the Weierstrass transform, Poisson integral, Hilbert transform, Riesz transforms, Bochner-Riesz mean operators, partial Fourier integral, Riesz potential, and Bessel potential. Using the theory of reproducing kernels, we construct simple and efficient representations for a class of Fourier multiplier operators Tm on the Paley-Wiener space Hh. In addition, we give an error estimate formula for the approximation and obtain convergence results as the parameters and the independent variables approach zero. Furthermore, using numerical quadrature rules to compute single and multiple integrals, we give numerical examples and write explicitly the extremal function and the corresponding Fourier multiplier operators.

Keywords: Fourier multiplier operators, Gauss-Kronrod method of integration, Paley-Wiener space, Tikhonov regularization

Procedia PDF Downloads 286
33 Evaluation of Fusion Sonar and Stereo Camera System for 3D Reconstruction of Underwater Archaeological Object

Authors: Yadpiroon Onmek, Jean Triboulet, Sebastien Druon, Bruno Jouvencel

Abstract:

The objective of this paper is to develop a 3D underwater reconstruction of archaeological objects based on the fusion of a sonar system and a stereo camera system. The underwater images are obtained from a calibrated camera system. Multiple image pairs are input, and we first apply well-known filters to improve the quality of the underwater images. The features of interest between image pairs are detected with the FAST detector and matched using FLANN. Subsequently, the RANSAC method is applied to reject outlier points. The putative inliers are triangulated to produce local sparse point clouds in 3D space, using a pinhole camera model and Euclidean distance estimation. The SfM technique is used to build the global sparse point cloud. Finally, the ICP method is used to fuse the sonar information with the stereo model. The accuracy of the final 3D models is assessed by comparing measurements against the real object.
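A minimal sketch of the matching and outlier-rejection stage using OpenCV; since FAST alone provides no descriptors, this sketch substitutes ORB (whose detector is FAST) so that FLANN matching and RANSAC rejection can run end to end. File names are placeholders, and the thresholds are illustrative, not the paper's:

```python
import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder files
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)                   # FAST-based detector
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# FLANN with an LSH index, suited to binary ORB descriptors.
flann = cv2.FlannBasedMatcher(
    dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1), {})
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in (p for p in matches if len(p) == 2)
        if m.distance < 0.75 * n.distance]             # Lowe's ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
# RANSAC rejects outliers while estimating the fundamental matrix.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
```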

Keywords: 3D reconstruction, archaeology, fusion, stereo system, sonar system, underwater

Procedia PDF Downloads 278
32 Hierarchical Cluster Analysis of Raw Milk Samples Obtained from Organic and Conventional Dairy Farming in Autonomous Province of Vojvodina, Serbia

Authors: Lidija Jevrić, Denis Kučević, Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Milica Karadžić

Abstract:

In the present study, Hierarchical Cluster Analysis (HCA) was applied in order to determine the differences between milk samples originating from a conventional dairy farm (CF) and an organic dairy farm (OF) in AP Vojvodina, Republic of Serbia. The clustering was performed on the average values of saturated fatty acid (SFA) content and unsaturated fatty acid (UFA) content obtained for every season; the HCA therefore included the annual SFA and UFA content values. The clustering procedure was carried out on the basis of Euclidean distances and the single linkage algorithm. The obtained dendrograms indicated that the clustering of UFA in OF was much more uniform compared to the clustering of UFA in CF. In OF, spring stands out from the other seasons of the year; similarly, in CF, winter is separated from the other seasons. The results could be expected because the fatty acid composition is greatly influenced by the season and the nutrition of dairy cows during the year.
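A minimal sketch of the clustering step, assuming illustrative stand-in values (the real SFA/UFA averages come from the milk analyses); SciPy's single linkage on Euclidean distances matches the procedure described:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Stand-in data: rows are seasons, columns are SFA and UFA content (%).
samples = np.array([[68.2, 31.8],    # winter
                    [66.5, 33.5],    # spring
                    [64.9, 35.1],    # summer
                    [67.1, 32.9]])   # autumn
labels = ["winter", "spring", "summer", "autumn"]

# Single-linkage clustering on Euclidean distances, as in the study.
Z = linkage(samples, method="single", metric="euclidean")
dendrogram(Z, labels=labels)
plt.show()
```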

Keywords: chemometrics, clustering, food engineering, milk quality

Procedia PDF Downloads 248
31 Finite Eigenstrains in Nonlinear Elastic Solid Wedges

Authors: Ashkan Golgoon, Souhayl Sadik, Arash Yavari

Abstract:

Eigenstrains in nonlinear solids are created by anelastic effects such as non-uniform temperature distributions, growth, remodeling, and defects. Understanding eigenstrains is indispensable, as they can generate residual stresses and strongly affect the overall response of solids. Here, we study the residual stress and deformation fields of an incompressible isotropic infinite wedge with a circumferentially-symmetric distribution of finite eigenstrains. We construct a material manifold whose Riemannian metric explicitly depends on the eigenstrain distribution, thereby turning the problem into a classical nonlinear elasticity problem in which we find an embedding of the Riemannian material manifold into the ambient Euclidean space. In particular, we find exact solutions for the residual stress and deformation fields of a neo-Hookean wedge having a symmetric inclusion with finite radial and circumferential eigenstrains. Moreover, we numerically solve a similar problem in which a symmetric Mooney-Rivlin inhomogeneity with finite eigenstrains is placed in a neo-Hookean wedge. Generalization of the eigenstrain problem to other geometries is also discussed.

Keywords: finite eigenstrains, geometric mechanics, inclusion, inhomogeneity, nonlinear elasticity

Procedia PDF Downloads 224
30 Content-Based Mammograms Retrieval Based on Breast Density Criteria Using Bidimensional Empirical Mode Decomposition

Authors: Sourour Khouaja, Hejer Jlassi, Nadia Feddaoui, Kamel Hamrouni

Abstract:

Most medical images, and especially mammograms, are now stored in large databases. Retrieving a desired image is of great importance for finding diagnoses of previous similar cases. Our method is implemented to assist radiologists in retrieving mammographic images containing breasts with a density aspect similar to that seen on the query mammogram. This is a challenge given the importance of density criteria in cancer screening and their effect on segmentation. We used Bidimensional Empirical Mode Decomposition (BEMD) to characterize the content of images and the Euclidean distance to measure similarity between images. Through experiments on the MIAS mammography image database, we confirm that the results are promising. The performance was evaluated using precision and recall curves comparing query and retrieved images; computing recall and precision proved the effectiveness of applying CBIR to large mammographic image databases. We found a precision of 91.2% for mammography retrieval with a recall of 86.8%.
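A minimal sketch of Euclidean-distance retrieval with precision/recall scoring, assuming random stand-in descriptors in place of the BEMD features; the feature dimension and relevant-set labels are illustrative:

```python
import numpy as np

def retrieve(query_feat, db_feats, k=10):
    """Rank database images by Euclidean distance to the query
    feature vector and return the top-k indices."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(d)[:k]

def precision_recall(retrieved, relevant):
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Toy example: 64-dimensional stand-ins for BEMD descriptors.
rng = np.random.default_rng(1)
db = rng.random((322, 64))               # MIAS contains 322 mammograms
q = db[5] + 0.01 * rng.random(64)        # a slightly perturbed query
top = retrieve(q, db, k=10)
p, r = precision_recall(top.tolist(), relevant=[5, 6, 7, 8])
```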

Keywords: BEMD, breast density, content-based, image retrieval, mammography

Procedia PDF Downloads 205
29 Vector Quantization Based on Vector Difference Scheme for Image Enhancement

Authors: Biji Jacob

Abstract:

The vector quantization algorithm uses a minimum-distance calculation for codebook generation; this time-consuming calculation, performed on each pixel value, leads to high computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement purposes. In the proposed scheme, vector differences between the vectors are taken as the new-generation vectors, or new codebook vectors. The codebook is updated by comparing each new-generation vector with a threshold value having minimum error with respect to the parent vector; the minimum error decides the fitness of each newly generated vector. Thus the codebook is generated in an adaptive manner, and the fitness value is determined for the suppression of the degraded portion of the image, thereby enhancing the image through the adaptive searching capability of vector quantization with the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for enhancing the image, with peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
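A minimal sketch of the three performance parameters named above (MSE, PSNR, Euclidean distance), assuming 8-bit images; the enhancement algorithm itself is not reproduced here:

```python
import numpy as np

def quality_metrics(original, enhanced):
    """MSE, PSNR (dB), and Euclidean distance between two 8-bit images."""
    o = original.astype(np.float64)
    e = enhanced.astype(np.float64)
    mse = np.mean((o - e) ** 2)
    psnr = 10 * np.log10(255.0**2 / mse) if mse > 0 else float("inf")
    e_dist = np.sqrt(np.sum((o - e) ** 2))
    return mse, psnr, e_dist
```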

Keywords: codebook, image enhancement, vector difference, vector quantization

Procedia PDF Downloads 231
28 On the Influence of the Metric Space in the Critical Behavior of Magnetic Temperature

Authors: J. C. Riaño-Rojas, J. D. Alzate-Cardona, E. Restrepo-Parra

Abstract:

In this work, a study of generic magnetic nanoparticles with varying metric spaces is presented. As the metric space is changed, the nanoparticle form and the inner product also vary, since the energy scale is not conserved. The study is carried out using Monte Carlo simulations combining the Wolff embedding and Metropolis algorithms. The Metropolis algorithm is used in high-temperature regions to reach equilibrium quickly, while the Wolff embedding algorithm is used in the low-temperature and critical regions in order to reduce the critical slowing-down phenomenon. The number of ions is kept constant across the different forms, and the critical temperatures are found using finite-size scaling. We observed that the critical temperatures do not exhibit significant changes when the metric space is varied. Additionally, the effective dimension corresponding to each metric space was determined. A study of static behavior was carried out to obtain the static critical exponents. The objective of this work is to observe the behavior of thermodynamic quantities such as energy, magnetization, specific heat, susceptibility, and Binder cumulants in the critical region, in order to determine whether the magnetic nanoparticles describe their magnetic interactions in Euclidean space or whether there is a correspondence with other metric spaces.
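A minimal sketch of a Metropolis sweep for a generic 2D Ising-like spin model with nearest-neighbor coupling J = 1; the study's actual Hamiltonian, lattice geometry, and Wolff embedding step depend on the chosen metric space and are not reproduced here:

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep on a periodic 2D lattice of +/-1 spins."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * nb          # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1                # accept the flip
    return spins

rng = np.random.default_rng(2)
spins = rng.choice([-1, 1], size=(32, 32))
spins = metropolis_sweep(spins, beta=0.4, rng=rng)
```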

Keywords: nanoparticles, metric, Monte Carlo, critical behaviour

Procedia PDF Downloads 483
27 Electricity Generation from Renewables and Targets: An Application of Multivariate Statistical Techniques

Authors: Filiz Ersoz, Taner Ersoz, Tugrul Bayraktar

Abstract:

Renewable energy is referred to as "clean energy", and popular support for the use of renewable energy (RE) rests on its provision of electricity with zero carbon dioxide emissions. This study provides useful insight into renewable energy in the European Union (EU), especially electricity generation obtained from renewables and the associated targets. The objective of this study is to identify groups of European countries using multivariate statistical analysis and selected indicators. The hierarchical clustering method is used to decide the number of clusters for EU countries. The statistical hierarchical cluster analysis is based on Ward's clustering method and squared Euclidean distances, and it identified eight distinct clusters of European countries. Then, the non-hierarchical (k-means) clustering method was applied. Discriminant analysis was used to determine the validity of the results, with the data normalized by Z-score transformation. To explore the relationships between the selected indicators, correlation coefficients were computed. The results of the study reveal the current situation of RE in the European Union Member States.
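A minimal sketch of the pipeline (Z-score normalization, Ward clustering, then k-means) on random stand-in data; the real matrix holds the selected RE indicators per country. SciPy's 'ward' linkage minimizes within-cluster variance on Euclidean distances, matching the squared-Euclidean/Ward setup described:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Stand-in matrix: rows are countries, columns are RE indicators.
rng = np.random.default_rng(3)
X = rng.random((28, 5))

Xz = StandardScaler().fit_transform(X)        # Z-score normalization

Z = linkage(Xz, method="ward")                # Ward's hierarchical step
hier_labels = fcluster(Z, t=8, criterion="maxclust")   # 8 clusters

# Non-hierarchical step: k-means with the same cluster count.
km_labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(Xz)
```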

Keywords: share of electricity generation, k-means clustering, discriminant, CO2 emission

Procedia PDF Downloads 389
26 A Quantum Leap: Developing Quantum Semi-Structured Complex Numbers to Solve the “Division by Zero” Problem

Authors: Peter Jean-Paul, Shanaz Wahid

Abstract:

The problem of division by zero can be stated as: "what is the value of 0 x 1/0?" This expression has been considered undefined by mathematicians because it has two equally valid solutions, either 0 or 1. Recently, the semi-structured complex number set was invented to solve "division by zero". However, whilst the number set had some merits, it was considered to have a poor theoretical foundation and did not provide a quality solution to "division by zero". Moreover, the set lacked consistency in simple algebraic calculations, producing contradictory results when dividing by zero. To overcome these issues, this research starts by treating the expression "0 x 1/0" as a quantum mechanical system that produces two entangled results, 0 and 1. Dirac notation (a tool from quantum mechanics) is then used to redefine the unstructured unit p in semi-structured complex numbers so that p represents the superposition of the two results (0 and 1) and collapses into a single value when used in algebraic expressions. In the process, this paper describes a new number set called Quantum Semi-structured Complex Numbers that provides a valid solution to the problem of "division by zero". This research shows that this new set (1) forms a field, (2) produces consistent results when solving division by zero problems, and (3) can be used to accurately describe systems whose mathematical descriptions involve division by zero. This research thus provides a firm foundation for Quantum Semi-structured Complex Numbers and supports their practical use.

Keywords: division by zero, semi-structured complex numbers, quantum mechanics, Hilbert space, Euclidean space

Procedia PDF Downloads 123
25 Variable vs. Fixed Window Width Code Correlation Reference Waveform Receivers for Multipath Mitigation in Global Navigation Satellite Systems with Binary Offset Carrier and Multiplexed Binary Offset Carrier Signals

Authors: Fahad Alhussein, Huaping Liu

Abstract:

This paper compares the multipath mitigation performance of code correlation reference waveform receivers with variable and fixed window widths, for the binary offset carrier and multiplexed binary offset carrier signals typically used in global navigation satellite systems. In the variable window width method, the width is iteratively reduced until the multipath-induced distortion of the discriminator is eliminated. This distortion is measured as the Euclidean distance between the actual discriminator (obtained with the incoming signal) and the local discriminator (generated with a local copy of the signal). The variable window width has shown better performance compared to the fixed window width. In particular, the former yields zero error for all delays for the BOC and MBOC signals considered, while the latter gives rather large nonzero errors for small delays in all cases. Due to its computational simplicity, the variable window width method is well suited for implementation in low-cost receivers.

Keywords: correlation reference waveform receivers, binary offset carrier, multiplexed binary offset carrier, global navigation satellite systems

Procedia PDF Downloads 92
24 Chinese Travelers’ Outbound Intentions to Visit Short-and-Long Haul Destinations: The Impact of Cultural Distance

Authors: Lei Qin

Abstract:

Culture has long been recognized as a possible influence on travelers' decisions, which explains why travelers in different countries make distinct decisions. Cultural distance is a concept quantifying how much difference there is between travelers' home culture and that of the destination, but research distinguishing short-haul from long-haul travel destinations is limited. This study addresses the gap by examining the impact of cultural distance on Chinese travelers' intentions to visit short-haul and long-haul destinations, respectively. Six cultural distance measurements were used: five calculated from secondary databases (Kogut & Singh, developed Kogut & Singh, the Euclidean distance index (EDI), the World Values Survey index (WVS), and the social axioms measurement (SAM)), plus perceived cultural distance (PCD) collected from a primary survey. Across the six measurements, cultural distance has opposite impacts on Chinese outbound travelers' intentions for short-haul and long-haul travel. For short-haul travel, intentions are positively influenced by cultural distance; a possible reason is that travelers' novelty-seeking satisfaction outweighs the strangeness of overseas regions. For long-haul travel, intentions are negatively influenced by cultural distance; a possible explanation lies in travelers' uncertainty, perceived risk, and language concerns regarding farther destinations.
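A minimal sketch of the Kogut & Singh (1988) cultural distance index, one of the six measurements named above: the average of squared differences on each cultural dimension, each corrected for that dimension's cross-country variance. The dimension scores and variances below are illustrative placeholders, not actual survey values:

```python
import numpy as np

def kogut_singh(dest_scores, home_scores, variances):
    """Kogut & Singh index: mean of variance-corrected squared
    differences across cultural dimensions."""
    d = np.asarray(dest_scores, float)
    h = np.asarray(home_scores, float)
    v = np.asarray(variances, float)
    return np.mean((d - h) ** 2 / v)

# Illustrative values on four Hofstede-style dimensions.
cd = kogut_singh(dest_scores=[40, 91, 62, 46],
                 home_scores=[80, 20, 66, 30],
                 variances=[470, 580, 400, 520])
```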

Keywords: cultural distance, intention, outbound travel, short-long haul

Procedia PDF Downloads 160
23 Enhanced Model for Risk-Based Assessment of Employee Security with Bring Your Own Device Using Cyber Hygiene

Authors: Saidu I. R., Shittu S. S.

Abstract:

As the trend of personal devices accessing corporate data continues to rise through Bring Your Own Device (BYOD) practices, organizations recognize the potential cost reduction and productivity gains. However, the associated security risks pose a significant threat to these benefits. Often, organizations adopt BYOD environments without fully considering the vulnerabilities introduced by human factors in this context. This study presents an enhanced assessment model that evaluates the security posture of employees in BYOD environments using cyber hygiene principles. The framework assesses users' adherence to best practices and guidelines for maintaining a secure computing environment, employing rating scales and the Euclidean distance formula: the algorithm measures the distance between users' security practices and the organization's optimal security policies. To facilitate user evaluation, a simple and intuitive interface for automated assessment is developed. To validate the effectiveness of the proposed framework, design science research methods are employed, and empirical assessments are conducted using five artifacts to analyze user suitability in BYOD environments. By addressing human-factor vulnerabilities through the assessment of cyber hygiene practices, this study aims to enhance the overall security of BYOD environments and enable organizations to leverage the advantages of this evolving trend while mitigating potential risks.
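A minimal sketch of the Euclidean-distance scoring described above, assuming hypothetical practice items and a 1-5 scale; the study's actual scales, items, and policy baseline are not given in the abstract:

```python
import numpy as np

def hygiene_gap(user_scores, policy_scores):
    """Euclidean distance between a user's cyber-hygiene practice
    scores and the organization's optimal policy scores; a smaller
    gap indicates better adherence."""
    u = np.asarray(user_scores, float)
    p = np.asarray(policy_scores, float)
    return np.linalg.norm(u - p)

# e.g., five practices scored 1-5 (patching, passwords, Wi-Fi use, ...).
gap = hygiene_gap([3, 4, 2, 5, 3], [5, 5, 5, 5, 5])
```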

Keywords: security, BYOD, vulnerability, risk, cyber hygiene

Procedia PDF Downloads 39
22 A Design of Elliptic Curve Cryptography Processor based on SM2 over GF(p)

Authors: Shiji Hu, Lei Li, Wanting Zhou, DaoHong Yang

Abstract:

Data encryption is the foundation of today's communication, and improving the speed of encryption and decryption is a long-standing problem. In this paper, we propose an elliptic curve crypto processor architecture based on the SM2 prime field. In terms of hardware implementation, we optimized the algorithms at different stages of the structure. For finite field modular operations, we propose an optimized improvement of the Karatsuba-Ofman multiplication algorithm and shorten the critical path through a pipelined structure. Based on the SM2 recommended prime field, a fast modular reduction algorithm is used to reduce the 512-bit-wide data obtained from the multiplication unit. The radix-4 extended Euclidean algorithm is used to realize the conversion between the affine coordinate system and the Jacobian projective coordinate system. In the parallel scheduling of point operations on elliptic curves, we propose a three-level parallel structure for point addition and point doubling based on the Jacobian projective coordinate system. Combined with the scalar multiplication algorithm, we added mutual pre-operations to point addition and point doubling to improve the efficiency of scalar point multiplication. The proposed ECC hardware architecture was verified and implemented on Xilinx Virtex-7 and ZYNQ-7 platforms; each 256-bit scalar multiplication operation takes 0.275 ms, 32 times faster than a CPU (dual-core ARM Cortex-A9).
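A minimal sketch of the modular inversion that the coordinate conversion needs, using the classic (radix-2) extended Euclidean algorithm; the hardware design uses a radix-4 variant of the same recursion, which processes two quotient bits per iteration. The example prime is a stand-in, not the SM2 prime:

```python
def mod_inverse(a, p):
    """Modular inverse of a modulo prime p via extended Euclid."""
    old_r, r = a % p, p
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible modulo p")
    return old_s % p

# Converting a Jacobian point (X, Y, Z) back to affine coordinates
# needs one such inversion: x = X / Z^2, y = Y / Z^3 (mod p).
p = 2**255 - 19              # example prime, not the SM2 prime
assert (3 * mod_inverse(3, p)) % p == 1
```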

Keywords: elliptic curve cryptosystems, SM2, modular multiplication, point multiplication

Procedia PDF Downloads 53
21 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

The color histogram is the oldest method used by CBIR systems for indexing images. Global histograms, however, do not include spatial information, which is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods, such as the CCV (Color Coherence Vector), are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. The dissimilarity between two images is thus reduced to computing the distances between the N local histograms of both images, resulting in N*N values; generally, the lowest value is used to rank images, meaning that the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexing method and compare its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value, among the N*N values, to trust when comparing images; in other words, on which sub-region among the N*N candidates we should base image indexing. Based on the results achieved here, it seems that relying on local histograms, which imposes extra overhead on the system through the additional segmentation step, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram having the lowest distance to the query histograms.
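A minimal sketch of the local-histogram comparison described above, assuming non-overlapping blocks for simplicity (the paper uses N possibly overlapping blocks) and an illustrative grid size and bin count:

```python
import numpy as np

def local_histograms(img, grid=4, bins=16):
    """Split a grayscale image into grid x grid sub-regions and return
    one normalized histogram per block."""
    h, w = img.shape
    hists = []
    for bi in range(grid):
        for bj in range(grid):
            block = img[bi * h // grid:(bi + 1) * h // grid,
                        bj * w // grid:(bj + 1) * w // grid]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            hists.append(hist / hist.sum())
    return np.array(hists)                  # shape (grid*grid, bins)

def min_block_distance(h1, h2):
    """All N*N Euclidean distances between the blocks of two images;
    the smallest value is the one used to rank images."""
    d = np.linalg.norm(h1[:, None, :] - h2[None, :, :], axis=2)
    return d.min(), np.unravel_index(d.argmin(), d.shape)
```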

Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance

Procedia PDF Downloads 333
20 Aspects and Studies of Fractal Geometry in Automatic Breast Cancer Detection

Authors: Mrinal Kanti Bhowmik, Kakali Das Jr., Barin Kumar De, Debotosh Bhattacharjee

Abstract:

Breast cancer is the most common cancer and a leading cause of death for women in the 35 to 55 age group, and early detection can decrease its mortality rate. Mammography is considered the 'gold standard' for breast cancer detection and is presently the most popular modality for breast cancer screening. However, the screening of digital mammograms often leads to overdiagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. For that reason, recent studies involving thermal imaging as a screening technique have generated growing interest, especially in cases where mammography is limited, as in young patients with dense breast tissue. A tumor is a significant sign of breast cancer in both mammography and thermography. Tumors are complex in structure and exhibit statistical and textural features different from those of the background breast tissue. Fractal geometry is used to describe this type of complex structure where traditional Euclidean geometry fails. Over the last few years, fractal geometry has been applied in many medical image analysis applications (1D, 2D, or 3D). It plays a significant role in breast cancer detection using digital mammogram images, and fractal descriptors of thermal texture are also used in thermography for early detection of masses. This paper presents an overview of recent aspects and initiatives of fractals in breast cancer detection in both mammography and thermography. The scope of fractal geometry in automatic breast cancer detection using digital mammogram and thermogram images is analysed, forming a foundation for further study of fractal geometry in medical imaging for improving the efficiency of automatic detection.
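A minimal sketch of the box-counting (Minkowski) dimension, one standard fractal descriptor used to characterize boundary complexity; this is a generic illustration under the assumption of a square power-of-two binary mask, not the surveyed papers' exact pipelines:

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting dimension of a binary region mask."""
    n = mask.shape[0]                 # assume a square 2^k x 2^k mask
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Count boxes of side s containing at least one foreground pixel.
        view = mask.reshape(n // s, s, n // s, s)
        occupied = view.any(axis=(1, 3)).sum()
        sizes.append(s)
        counts.append(max(occupied, 1))
        s //= 2
    # Slope of log N(s) vs. log(1/s) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

mask = np.zeros((256, 256), bool)
mask[64:192, 64:192] = True           # a filled square has dimension ~2
print(box_counting_dimension(mask))
```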

Keywords: fractal, tumor, thermography, mammography

Procedia PDF Downloads 352
19 ADP Approach to Evaluate the Blood Supply Network of Ontario

Authors: Usama Abdulwahab, Mohammed Wahab

Abstract:

This paper presents the application of the uncapacitated facility location problem (UFLP) and the 1-median problem to support decision making in blood supply chain networks. A plethora of factors makes blood supply chain networks a complex yet vital problem for the regional blood bank: rapidly increasing demand; criticality of the product; strict storage and handling requirements; and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs comprise the allocation cost, transportation costs, and inventory costs. To address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for some Ontario cities (demand nodes) are used to test the developed algorithm. Sitation software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve the model. Computational experiments confirm the efficiency of the proposed approach: compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
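A minimal sketch of the 1-median step on Euclidean distances, with illustrative planar coordinates rather than the actual Ontario city data; the paper's full model additionally carries fixed opening, transportation, and inventory costs:

```python
import numpy as np

def one_median(sites, demands, weights):
    """Brute-force 1-median: pick the candidate site minimizing the
    demand-weighted sum of Euclidean distances to all demand nodes."""
    costs = [np.sum(weights * np.linalg.norm(demands - s, axis=1))
             for s in sites]
    return int(np.argmin(costs)), min(costs)

# Illustrative coordinates for candidate sites and demand nodes.
sites = np.array([[0.0, 0.0], [5.0, 2.0], [3.0, 8.0]])
demands = np.array([[1.0, 1.0], [4.0, 3.0], [2.5, 7.0], [6.0, 1.0]])
weights = np.array([10.0, 25.0, 15.0, 30.0])    # e.g., platelet demand
best_site, best_cost = one_median(sites, demands, weights)
```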

Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem

Procedia PDF Downloads 478
18 Effect of Distance to Health Facilities on Maternal Service Use and Neonatal Mortality in Ethiopia

Authors: Getiye Dejenu Kibret, Daniel Demant, Andrew Hayen

Abstract:

Introduction: In Ethiopia, more than half of newborn babies do not have access to Emergency Obstetric and Neonatal Care (EmONC) services. Understanding the effect of distance to health facilities on service use and neonatal survival is crucial to inform policymakers and improve resource distribution. We aimed to investigate the effect of distance to health services on maternal service use and neonatal mortality. Methods: We implemented a data linkage method based on geographic coordinates and calculated straight-line (Euclidean) distances from the Ethiopian 2016 Demographic and Health Survey (DHS) clusters to the closest health facility. We computed the distances in ESRI ArcGIS version 10.3 using the geographic coordinates of DHS clusters and health facilities. Generalised Structural Equation Modelling (GSEM) was used to estimate the effect of distance on neonatal mortality. Results: Poor geographic accessibility to health facilities affects maternal service usage and increases the risk of newborn mortality. For every ten-kilometre (km) increase in distance to a health facility, the odds of neonatal mortality increased by a factor of 1.33 (95% CI: 1.06 to 1.67). Distance also negatively affected antenatal care, facility delivery, and postnatal counselling service use. Conclusions: A lack of geographical access to health facilities decreases the likelihood of newborns surviving their first month of life and affects health service use during pregnancy and immediately after birth. The study also showed that antenatal care use was positively associated with facility delivery service use and that both positively influenced postnatal care use, demonstrating the interconnectedness of the continuum of care for maternal and neonatal services. Policymakers can leverage these findings to address accessibility barriers to health services.
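A minimal sketch of the distance-linkage step: straight-line (Euclidean) distance from each survey cluster to its nearest facility. The coordinates below are illustrative projected (x, y) points in kilometres; the study computed the distances in ArcGIS from the actual DHS cluster and facility coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

facilities = np.array([[12.0, 40.5], [15.2, 38.9], [10.8, 42.1]])
clusters = np.array([[11.5, 41.0], [14.9, 39.5], [13.0, 40.0]])

tree = cKDTree(facilities)
dist_km, nearest_idx = tree.query(clusters)  # nearest facility per cluster
```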

Keywords: accessibility, distance, maternal health service, neonatal mortality

Procedia PDF Downloads 75
17 Application of a Confirmatory Composite Model for Assessing the Extent of Agricultural Digitalization: A Case of Proactive Land Acquisition Strategy (PLAS) Farmers in South Africa

Authors: Mazwane S., Makhura M. N., Ginege A.

Abstract:

Digitalization in South Africa has received considerable attention from policymakers, and government support for the development of the digital economy has been demonstrated through the enactment of various national policies and strategies. This study sought to develop an index of agricultural digitalization by applying confirmatory composite analysis (CCA) and to determine the factors that affect the development of digitalization on PLAS farms. Data on the indicators of the three dimensions of digitalization were collected from 300 Proactive Land Acquisition Strategy (PLAS) farms in South Africa using semi-structured questionnaires. CCA was employed to reduce the items into three digitalization dimensions and ultimately into a digitalization index. Standardized digitalization index scores were extracted and fitted to a linear regression model to determine the factors affecting digitalization development. The results revealed that the model shows practical validity and can be used to measure digitalization development, as the measures of fit (geodesic distance, standardized root mean square residual, and squared Euclidean distance) were all below their respective 95% quantiles of bootstrap discrepancies (HI95 values). Therefore, digitalization is an emergent variable that can be measured using CCA. The average level of digitalization on PLAS farms was 0.2 and varied significantly across provinces. The factors that significantly influence digitalization development on PLAS land reform farms were age, gender, farm type, network type, and cellular data type. These findings should enable researchers and policymakers to understand the level of digitalization and patterns of development, as well as correctly attribute digitalization development to the contributing factors.

Keywords: agriculture, digitalization, confirmatory composite model, land reform, proactive land acquisition strategy, South Africa

Procedia PDF Downloads 14
16 Normalized P-Laplacian: From Stochastic Game to Image Processing

Authors: Abderrahim Elmoataz

Abstract:

More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs, where each vertex represents measured data and each edge represents a relationship (connectivity, affinity, or interaction) between two vertices. Processing and analyzing these types of data is a major challenge for both the image and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools that were initially developed on usual Euclidean spaces and have proven efficient for many inverse problems and applications on usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years there has been increasing interest in one of the major mathematical tools for signal and image analysis: Partial Differential Equation (PDE) variational methods on graphs. The normalized p-Laplacian operator was recently introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian operator, which has been extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on an extension of p-harmonious functions, introduced as discrete approximations of both the infinity Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.

Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems

Procedia PDF Downloads 480
15 Curvature Based-Methods for Automatic Coarse and Fine Registration in Dimensional Metrology

Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani

Abstract:

Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces with accuracy, reliability, and holisticity. The obtained data are aligned and fused into a common coordinate system by a registration technique involving coarse and fine registration. Standardized iterative methods have been established for fine registration, such as Iterative Closest Point (ICP) and its variants. For coarse registration, no conventional method has been adopted yet, despite the significant number of techniques developed in the literature to supply an automatic rough matching between data sets. Two main issues are addressed in this paper: coarse registration and fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transformation (HT) and an improved RANSAC transformation. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of the ICP method is proposed in order to reduce registration error using curvature parameters. A specific distance reflecting curvature similarity has been combined with the Euclidean distance to define the distance criterion used for correspondence searching. Additionally, the objective function has been improved by combining point-to-point (P-P) minimization and point-to-plane (P-Pl) minimization with automatic weights, determined from the curvature features computed beforehand at each point of the workpiece surface. The algorithms are applied to simulated and real data produced by a computed tomography (CT) system. The obtained results reveal the benefit of the proposed novel curvature-based registration methods.
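A minimal sketch of a correspondence metric blending Euclidean point distance with curvature similarity, as the abstract describes; the weight w and the linear blending rule are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def combined_distance(p, q, kp, kq, w=0.5):
    """Blend of Euclidean distance between points p and q with the
    difference of their discrete curvatures kp and kq."""
    d_euclid = np.linalg.norm(p - q)
    d_curv = abs(kp - kq)           # curvature-similarity term
    return (1.0 - w) * d_euclid + w * d_curv
```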

Keywords: discrete curvature, RANSAC transformation, Hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computed tomography

Procedia PDF Downloads 398
14 A Deep Learning Approach to Real Time and Robust Vehicular Traffic Prediction

Authors: Bikis Muhammed, Sehra Sedigh Sarvestani, Ali R. Hurson, Lasanthi Gamage

Abstract:

Vehicular traffic events have overly complex spatial correlations and temporal interdependencies and are also influenced by environmental events such as weather conditions. To capture these interdependencies and make more realistic vehicular traffic predictions, graph neural network (GNN) based traffic prediction models have been extensively utilized, due to their capability of capturing non-Euclidean spatial correlation very effectively. However, most existing GNN-based traffic prediction models have limitations in learning complex and dynamic spatial and temporal patterns due to the following missing factors. First, most have used static distances, or sometimes haversine distances, between spatially separated traffic observations to estimate spatial correlation. Second, most have not incorporated environmental events that have a major impact on normal traffic states. Finally, most do not use an attention mechanism to focus on the most important traffic observations. The objective of this paper is to make real-time vehicular traffic predictions while incorporating the effect of weather conditions. To fill the gaps above, our prediction model uses real-time driving distances between sensors to build a distance matrix, or spatial adjacency matrix, and capture spatial correlation. In addition, our model considers the effect of six types of weather conditions and has an attention mechanism in both spatial and temporal data aggregation. The model efficiently captures the spatial and temporal correlation between traffic events; it relies on a graph attention network (GAT) and bidirectional long short-term memory (Bi-LSTM) plus attention layers, and is called GAT-BILSTMA.
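A minimal sketch of turning a (driving) distance matrix into a spatial adjacency matrix via a thresholded Gaussian kernel, a common construction in traffic GNNs; sigma and eps are tuning choices for illustration, not values from the paper:

```python
import numpy as np

def distance_adjacency(D, sigma, eps=0.1):
    """Adjacency weights from a pairwise distance matrix D using a
    Gaussian kernel, with weak connections thresholded to zero."""
    A = np.exp(-(D ** 2) / (sigma ** 2))
    A[A < eps] = 0.0                # sparsify weak connections
    return A
```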

Keywords: deep learning, real time prediction, GAT, Bi-LSTM, attention

Procedia PDF Downloads 43
13 Enhanced Tensor Tomographic Reconstruction: Integrating Absorption, Refraction and Temporal Effects

Authors: Lukas Vierus, Thomas Schuster

Abstract:

A general framework is examined for dynamic tensor field tomography within an inhomogeneous medium characterized by refraction and absorption, treated as an inverse source problem for the associated transport equation. Guided by Fermat's principle, the Riemannian metric within the specified domain is determined by the medium's refractive index. While considerable literature exists on the inverse problem of reconstructing a tensor field from its longitudinal ray transform within a static Euclidean environment, limited inversion formulas and algorithms are available for general Riemannian metrics and time-varying tensor fields. It is established that tensor field tomography, akin to an inverse source problem for a transport equation, persists in dynamic scenarios; framing dynamic tensor tomography as an inverse source problem embodies a comprehensive perspective within this domain. Ensuring well-defined forward mappings necessitates establishing existence and uniqueness for the underlying transport equations. However, the bilinear forms of the associated weak formulations fail to meet the coercivity condition. Consequently, recourse is taken to viscosity solutions, demonstrating their unique existence within suitable Sobolev spaces (in the static case) and Sobolev-Bochner spaces (in the dynamic case), under a specific assumption restricting variations in the refractive index. Notably, the adjoint problem can also be reformulated as a transport equation, with analogous results regarding uniqueness. Analytical solutions are expressed as integrals over geodesics, facilitating more efficient evaluation of the forward and adjoint operators compared to solving partial differential equations. Numerical experiments are conducted using a Nesterov-accelerated Landweber method, encompassing various fields, absorption coefficients, and refractive indices, thereby illustrating the enhanced reconstruction achieved through this holistic modeling approach.

Keywords: attenuated refractive dynamic ray transform of tensor fields, geodesics, transport equation, viscosity solutions

Procedia PDF Downloads 9
12 Climate Species Lists: A Combination of Methods for Urban Areas

Authors: Andrea Gion Saluz, Tal Hertig, Axel Heinrich, Stefan Stevanovic

Abstract:

Higher temperatures, seasonal changes in precipitation, and extreme weather events increasingly affect trees. To counteract the growing challenges for urban trees, strategies are being sought both to preserve existing tree populations and to prepare for the coming decades. One such strategy is strategic climate tree species selection: the search for species or varieties that can cope with the new climatic conditions. Many efforts in German-speaking countries address this in detail, such as the tree lists of the German Conference of Garden Authorities (GALK), the project Stadtgrün 2021, or the Climate Species Matrix of Prof. Dr. Roloff. In this context, different methods for an appropriate species selection are offered; one possibility is to select certain physiological attributes that indicate the climate resilience of a species. To calculate the dissimilarity of the present climate of different geographic regions in relation to the future climate of a given city, a weighted (standardized) Euclidean distance (SED) over seasonal climate values is calculated for each region of the Earth. The calculation was performed in the QGIS geographic information system using global raster datasets of monthly climate values for the 1981-2010 standard period. Data from a European forest inventory were then used to identify tree species growing in the calculated analogue climate regions; the inventory is a compilation of georeferenced point data at 1 km grid resolution on the occurrence of tree species in 21 European countries. In this project, the results are shown for the city of Zurich for the year 2060. In a first step, analogue climate regions were identified based on projected climate values for the measuring station Kirche Fluntern (ZH). In a further step, the methods mentioned above were applied to generate tree species lists for the city of Zurich. These lists were then qualitatively evaluated with respect to the suitability of the different tree species for the Zurich area, yielding a cleaned and thus usable list of possible future tree species.
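A minimal sketch of the standardized Euclidean distance between a candidate region's present climate and a city's projected future climate. The variable layout (four seasonal temperatures followed by four seasonal precipitation values) and all numbers below are illustrative, not the project's raster data:

```python
import numpy as np

def seasonal_sed(candidate, target, scale):
    """Standardized Euclidean distance: each variable's difference is
    divided by its standard deviation (`scale`) before summing."""
    c = np.asarray(candidate, float)
    t = np.asarray(target, float)
    s = np.asarray(scale, float)
    return np.sqrt(np.sum(((c - t) / s) ** 2))

# e.g., 8 values: seasonal mean temperature (°C), then precipitation (mm).
target_2060 = [2.1, 11.5, 20.3, 10.9, 220, 280, 300, 250]  # Zurich-like
candidate = [3.0, 12.0, 21.5, 11.2, 200, 260, 310, 240]
sed = seasonal_sed(candidate, target_2060,
                   scale=[2, 2, 2, 2, 40, 40, 40, 40])
```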

Keywords: climate change, climate region, climate tree, urban tree

Procedia PDF Downloads 72
11 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis

Authors: H. Jung, N. Kim, B. Kang, J. Choe

Abstract:

History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainties of initial reservoir models; it is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models: the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Second, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models which show the most similar or dissimilar well oil production rates (WOPR) with respect to the true values (10% each); the remaining 80% of models are then classified by the trained SVM, and we select the models on the low-WOPR-error side. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models which have a channel trend similar to the true reservoir model, and the average field of the selected models is used as a probability map for regeneration. Newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results, as it fails to find the correct geological features of the true model; history matching with the regenerated ensemble, however, offers reliable characterization by identifying the proper channel trend, and it gives dependable predictions of future performance with reduced uncertainties. We propose a novel classification scheme which integrates PCA, MDS, and SVM for regenerating reservoir models; the scheme can easily sort out reliable models which have a channel trend similar to the reference in the lower-dimensional space.
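A minimal sketch of the PCA-to-MDS-to-SVM screening chain on random stand-in data; the array shapes, component counts, and placeholder labels are illustrative, while in the study the labels come from WOPR errors against the true values:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

# Stand-in data: rows are reservoir models, columns are flattened
# permeability fields (here 50 x 50 grids).
rng = np.random.default_rng(4)
perm = rng.random((100, 2500))

pcs = PCA(n_components=20).fit_transform(perm)     # main characteristics
xy = MDS(n_components=2, dissimilarity="euclidean",
         random_state=0).fit_transform(pcs)        # 2D projection

# Train on the 20% labeled most similar (1) or dissimilar (0) to the
# true WOPR; these labels are placeholders for the actual WOPR errors.
train_idx = np.arange(20)
labels = np.array([1] * 10 + [0] * 10)
clf = SVC(kernel="rbf").fit(xy[train_idx], labels)
selected = np.where(clf.predict(xy) == 1)[0]       # models to keep
```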

Keywords: history matching, principal component analysis, reservoir modelling, support vector machine

Procedia PDF Downloads 132
10 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principals in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs the transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space" where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal, and the arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set, selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR); testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm; the accuracy is not affected by AWGN, and the method maintains ~93% accuracy at 0 dB SNR.

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 258
9 Quantum Cum Synaptic-Neuronal Paradigm and Schema for Human Speech Output and Autism

Authors: Gobinathan Devathasan, Kezia Devathasan

Abstract:

Objective: To improve the current modified Broca-Wernicke-Lichtheim-Kussmaul speech schema and provide insight into autism. Methods: We reviewed the pertinent literature. Current findings, involving Brodmann areas 22, 46, 9, 44, 45, 6, and 4, are based on neuropathology and functional MRI studies. However, in primary autism there is no lucid explanation, and the changes described, whether in neuropathology or functional MRI, appear consequential. Findings: We put forward an enhanced model which may explain the enigma related to autism. Vowel output is subcortical and does not need cortical representation, whereas consonant speech is cortical in origin. Left lateralization is needed to commence the circuitry spin, as our life has evolved with L-amino acids and the left spin of electrons. A fundamental species difference is that we are capable of three-syllable consonants and bi-syllable expression, whereas cetaceans and songbirds are confined to single or dual consonants. The four key sites for speech are the superior auditory cortex, Broca's two areas, and the supplementary motor cortex. Using the Argand diagram and Riemann projection, we theorize that the Euclidean three-dimensional synaptic-neuronal circuits of speech are quantized to coherent waves, and that decoherence takes place at area 6 (spherical representation). In this quantum state, complex 3-consonant languages are instantaneously integrated, and multiple languages can be learned, verbalized, and differentiated. Conclusion: We postulate that evolutionary human speech, unlike that of cetaceans and birds, is elevated to quantum interaction to achieve three-consonant/bi-syllable speech. In classical primary autism, the sudden switching off and on of speech noted in several cases could now be explained not by any anatomical lesion but by a failure of coherence. Area 6 projects directly into the prefrontal saccadic area (8), which further explains the second primary feature in autism: lack of eye contact. The third feature, repetitive finger gestures, located adjacent to the speech/motor areas, represents actual attempts to communicate with the autistic child, akin to sign language for the deaf.

Keywords: quantum neuronal paradigm, cetaceans and human speech, autism and rapid magnetic stimulation, coherence and decoherence of speech

Procedia PDF Downloads 158
8 The Geometrical Cosmology: The Projective Cast of the Collective Subjectivity of the Chinese Traditional Architectural Drawings

Authors: Lina Sun

Abstract:

Chinese traditional drawings related to buildings and construction apply a unique geometry that differs from Western Euclidean geometry and embrace a collection of special terminologies under the category of tu (the Chinese character for drawing). This paper will, on one side, etymologically analyze the terminologies of Chinese traditional architectural drawing and, on the other side, geometrically deconstruct the composition of tu and locate the visual narrative language of tu in the pictorial tradition. The geometrical analysis will center on selected series of Yang-shi-lei tu of the construction of emperors' mausoleums in the Qing Dynasty (1636-1912), and will also draw on earlier architectural drawings and architectural paintings, such as jiehua and paintings on religious and tomb frescoes, as comparisons. By doing this, the research will reveal that the terminologies corresponding to different geometrical forms indicate associations between architectural drawing and the philosophy of Chinese cosmology, and that the arrangement of the geometrical forms in the picture plane facilitates expressions of the concepts of space and position in this geometrical cosmology. These associations and expressions are the collective intentions of architectural drawing, evolving over a tradition of thousands of years without breakage and irrelevant to individual authorship. Moreover, the architectural tu itself, as an entity, not only functions as the representation of buildings but also expresses and strengthens intentions by using the unique Chinese geometrical language flexibly and intentionally. These collective cosmological spatial intentions and the corresponding geometrical words and languages reveal that Chinese traditional architectural drawing functions as a unique architectural site with subjectivity, which exists in parallel with buildings and expresses intentions and meanings by itself. The methodology and findings of this research will therefore challenge previous research that treats architectural drawings merely as representations of buildings and uses them only as evidence to reconstruct information about buildings. Furthermore, this research situates architectural drawing between research on Chinese technological tu and research on artistic painting, bridging two academic areas that have usually treated these partial features of architectural drawing separately. Beyond this research, the collective subjectivity of Chinese traditional drawings will facilitate the study of the transition from tradition to drawing modernity, where the individual subjective identities and intentions of architects arise, and will support an understanding of both the ambivalence and the affinity of drawing modernity encountering the traditions.

Keywords: Chinese traditional architectural drawing (tu), etymology of tu, collective subjectivity of tu, geometrical cosmology in tu, geometry and composition of tu, Yang-shi-lei tu

Procedia PDF Downloads 90
7 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database

Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani

Abstract:

The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern Eurasia margin, resulting in a considerably active seismic region. The NATO-supported Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project (BSHAP) (2007-2011, 2012-2015) supported the preparation of new seismic hazard maps of the Western Balkans, but inspection of the seismic hazard models later produced by these countries on a national scale shows significant differences in design PGA values at the borders, for instance North Albania-Montenegro and South Albania-Greece. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), which is generally the component with the highest impact on seismic hazard assessment. At the time of the project a modest database was available, namely 672 three-component records, whereas this strong motion database has now grown considerably, to 20,939 records with Mw ranging from 3.7 to 7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong motion database showed a lack of recordings in the moderate-to-large magnitude and short distance ranges; there is therefore a need to re-evaluate the GMPEs in light of the recently updated database and the new generations of ground motion models. In some cases, events were more extensively documented in one database than in another: the 1979 Montenegro earthquake, for example, has a considerably larger number of records in the BSHAP analogue strong motion database than in ESM23. Therefore, the strong motion flat-file provided by the BSHAP project was merged with the ESM23 database for the polygon studied in this project. After performing the preliminary residual analysis, the candidate GMPEs were identified. This process used the GMPE performance metrics available within the SMT in the OpenQuake platform: the likelihood model and Euclidean Distance Based Ranking (EDR). Finally, a GMPE logic tree was selected, and, following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum.

Keywords: residual analysis, GMPE, Western Balkans, strong motion, OpenQuake

Procedia PDF Downloads 41