Search results for: probability matrix
3200 Cognitive Relaying in Interference Limited Spectrum Sharing Environment: Outage Probability and Outage Capacity
Authors: Md Fazlul Kader, Soo Young Shin
Abstract:
In this paper, we consider a cognitive relay network (CRN) in which the primary receiver (PR) is protected by peak transmit power $\bar{P}_{ST}$ and/or peak interference power Q constraints. In addition, the interference effect from the primary transmitter (PT) is considered to show its impact on the performance of the CRN. We investigate the outage probability (OP) and outage capacity (OC) of the CRN by deriving closed-form expressions over the Rayleigh fading channel. Results show that both the OP and OC improve when the number of cooperative relay nodes increases, as well as when the PT is far away from the SR.
Keywords: cognitive relay, outage, interference limited, decode-and-forward (DF)
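For a single Rayleigh fading link, the outage probability has a simple closed form that a Monte Carlo run can verify. The sketch below is illustrative only, not the authors' multi-relay derivation; the mean SNR, threshold, and trial count are assumed parameters:

```python
import math
import random

def outage_probability_exact(mean_snr, snr_threshold):
    """Closed form for a single Rayleigh fading link, where the
    instantaneous SNR is exponential: P_out = 1 - exp(-g_th / mean)."""
    return 1.0 - math.exp(-snr_threshold / mean_snr)

def outage_probability_mc(mean_snr, snr_threshold, trials=200_000, seed=1):
    """Monte Carlo estimate of the same quantity: draw exponential SNRs
    and count how often they fall below the outage threshold."""
    rng = random.Random(seed)
    outages = sum(rng.expovariate(1.0 / mean_snr) < snr_threshold
                  for _ in range(trials))
    return outages / trials
```

With 200,000 trials the simulated value matches the closed form to well within one percent.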
Procedia PDF Downloads 510
3199 Singularity Theory in Yakam Matrix by Multiparameter Bifurcation Interfacial in Coupled Problem
Authors: Leonard Kabeya Mukeba Yakasham
Abstract:
The theoretical machinery from singularity theory introduced by Golubitsky, Stewart, and Schaeffer to study equivariant bifurcation problems is completed and expanded while being generalized to the multiparameter context. In this setting, the finite determinacy theorem for normal forms, the stability of equivariant bifurcation problems, and the structural stability of universal unfoldings are discussed. With the Yakam matrix, the solutions are limited for some stochastic nonlinear partial differential equations, leaving open questions in singularity theory and artificial intelligence for future work.
Keywords: equivariant bifurcation, symmetry singularity, equivariant jets and transversality, normal forms, universal unfolding instability, structural stability, artificial intelligence, pdens, yakam matrix
Procedia PDF Downloads 22
3198 Development of Visual Working Memory Precision: A Cross-Sectional Study of Simultaneously Delayed Responses Paradigm
Authors: Yao Fu, Xingli Zhang, Jiannong Shi
Abstract:
Visual working memory (VWM) capacity is the ability to maintain and manipulate short-term information that is not currently available. It is well known for forming the basis of numerous cognitive abilities and for its limitation in holding information. VWM span, the most popular measurable indicator, is found to reach the adult level (3-4 items) around 12-13 years of age, while less is known about the development of the precision of VWM capacity. Using the simultaneously delayed responses paradigm, the present study investigates the development of VWM precision among 6-18-year-old children and young adults, as well as its possible relationships with fluid intelligence and span. Results showed that precision and span both increased with age, and precision reached its maximum in the 16-17 age range. Moreover, when remembering 3 simultaneously presented items, the probability of remembering the target item correlated with fluid intelligence, and the probability of wrap errors (misbinding target and non-target items) correlated with age. When remembering more items, children performed worse than adults because of their wrap errors. Compared to span, VWM precision was an effective predictor of intelligence even after controlling for age. These results suggest that, unlike VWM span, precision develops in a slower yet more prolonged fashion. Moreover, a decreasing probability of wrap errors might be the main reason for the development of precision. Last, precision correlated more closely with intelligence than span did in childhood and adolescence, which might be driven by the probability of remembering the target item.
Keywords: fluid intelligence, precision, visual working memory, wrap errors
Procedia PDF Downloads 274
3197 Preparation of Alumina (Al2O3) Particles and MMCs of (Al-7% Si–0.45% Mg) Alloy Using Vortex Method
Authors: Abdulmagid A. Khattabi
Abstract:
The aim of this research is to study the dispersion of alumina (Al2O3) particles of (2-10) mm size in an (Al-7% Si-0.45% Mg) base alloy melt using the classical casting method. The mechanism of particle diffusion, whereby turning and stirring the melt creates vortexes that help the particles enter the matrix of the base alloy, has also been studied. Samples of metal matrix composites (MMCs) with dispersed particle percentages of (4% - 6% - 8% - 10% - 15% and 20%) were prepared. The effect of the particle dispersion on the mechanical properties of the produced samples was examined by tensile and hardness tests. It is found that the ultimate tensile strength of the produced composites can be increased by increasing the percentage of alumina particles in the matrix of the base alloy; it reaches (232 MPa) at (20%) of added particles. The results showed that the average hardness of the prepared samples increases with increasing alumina content. A microstructure study of the prepared samples was carried out; the results showed the location of the particles and their distribution in the matrix of the base alloy. The dissolution of alumina particles into the liquid base alloy was clear in some cases.
Keywords: base alloy, matrix, hardness, thermal properties, base metal MMCs
Procedia PDF Downloads 352
3196 The Effect of Particle Porosity in Mixed Matrix Membrane Permeation Models
Authors: Z. Sadeghi, M. R. Omidkhah, M. E. Masoomi
Abstract:
The purpose of this paper is to examine the gas transport behavior of mixed matrix membranes (MMMs) containing porous particles. The main existing models fall into two groups: two-phase (ideal contact) and three-phase (non-ideal contact). A new coefficient, J, was obtained to express the effect of particle porosity in the two-phase and three-phase models. The modified models are evaluated against existing models and experimental data using MATLAB. Comparison of the gas permeability predicted by the proposed modified models with that of existing models for different MMMs shows better prediction of gas permeability in MMMs.
Keywords: mixed matrix membrane, permeation models, porous particles, porosity
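For reference, the classical two-phase (ideal-contact) Maxwell permeation model that such modifications start from can be sketched as below. The porosity coefficient J introduced in the abstract is not reproduced here, and the parameter names are assumptions:

```python
def maxwell_permeability(p_c, p_d, phi_d):
    """Classical two-phase Maxwell model for the effective permeability of
    a mixed matrix membrane: p_c is the permeability of the continuous
    polymer phase, p_d that of the dispersed filler, and phi_d the filler
    volume fraction (ideal polymer-filler contact is assumed)."""
    num = p_d + 2 * p_c - 2 * phi_d * (p_c - p_d)
    den = p_d + 2 * p_c + phi_d * (p_c - p_d)
    return p_c * num / den
```

At zero filler loading the model reduces to the pure-polymer permeability, and a more permeable filler raises the effective value.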
Procedia PDF Downloads 382
3195 Performance Comparison of Wideband Covariance Matrix Sparse Representation (W-CMSR) with Other Wideband DOA Estimation Methods
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
In this paper, a performance comparison of the wideband covariance matrix sparse representation (W-CMSR) method with other existing wideband direction of arrival (DOA) estimation methods has been made. W-CMSR relies less on a priori information about the number of incident signals than ordinary subspace-based methods. Consider the perturbation-free covariance matrix of the wideband array output. The diagonal covariance elements are contaminated by unknown noise variance. The covariance matrix of the array output is conjugate symmetric, i.e., its upper right triangular elements can be represented by the lower left triangular ones. As the main diagonal elements are contaminated by unknown noise variance, one slides over them and aligns the lower left triangular elements column by column to obtain a measurement vector. Simulation results for W-CMSR are compared with those of other wideband DOA estimation methods such as the coherent signal subspace method (CSSM), Capon, l1-SVD, and JLZA-DOA. W-CMSR separates two signals very clearly, whereas CSSM, Capon, l1-SVD, and JLZA-DOA fail to separate them clearly, and a number of pseudo peaks exist in the spectrum of l1-SVD.
Keywords: W-CMSR, wideband direction of arrival (DOA), covariance matrix, electrical and computer engineering
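The measurement-vector construction described above (skip the noise-contaminated main diagonal, then stack the strictly lower-triangular elements column by column) can be sketched as:

```python
import numpy as np

def measurement_vector(R):
    """Stack the strictly lower-triangular elements of the array covariance
    matrix R column by column, skipping the main diagonal, whose entries
    are contaminated by the unknown noise variance."""
    n = R.shape[0]
    return np.concatenate([R[j + 1:, j] for j in range(n - 1)])
```

For a conjugate-symmetric R this vector loses no information other than the discarded diagonal.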
Procedia PDF Downloads 469
3194 Several Spectrally Non-Arbitrary Ray Patterns of Order 4
Authors: Ling Zhang, Feng Liu
Abstract:
A matrix is called a ray pattern matrix if each of its entries is either 0 or a ray in the complex plane originating from 0. A ray pattern A of order n is called spectrally arbitrary if the complex matrices in the ray pattern class of A give rise to all possible nth-degree complex polynomials; otherwise, it is said to be a spectrally non-arbitrary ray pattern. A spectrally arbitrary ray pattern A of order n is minimally spectrally arbitrary if replacing any nonzero entry of A makes it no longer spectrally arbitrary. In this paper, using the nilpotent-Jacobi method, we find that A(θ) is not spectrally arbitrary when n equals 4 for any θ greater than or equal to 0 and less than or equal to n. We give several ray patterns A(θ) of order n that are not spectrally arbitrary for some θ in this range, and one example is worked out.
Keywords: spectrally arbitrary, nilpotent matrix, ray patterns, sign patterns
Procedia PDF Downloads 181
3193 Riesz Mixture Model for Brain Tumor Detection
Authors: Mouna Zitouni, Mariem Tounsi
Abstract:
This research introduces an application of the Riesz mixture model for medical image segmentation for accurate diagnosis and treatment of brain tumors. We propose a pixel classification technique based on the Riesz distribution, derived from an extended Bartlett decomposition. To our knowledge, this is the first study addressing this approach. The Expectation-Maximization algorithm is implemented for parameter estimation. A comparative analysis, using both synthetic and real brain images, demonstrates the superiority of the Riesz model over a recent method based on the Wishart distribution.
Keywords: EM algorithm, segmentation, Riesz probability distribution, Wishart probability distribution
Procedia PDF Downloads 16
3192 Application of Heuristic Integration Ant Colony Optimization in Path Planning
Authors: Zeyu Zhang, Guisheng Yin, Ziying Zhang, Liguo Zhang
Abstract:
This paper studies path planning methods based on ant colony optimization (ACO) and proposes heuristic integration ant colony optimization (HIACO). The paper not only analyzes and optimizes the underlying principle but also simulates and analyzes the parameters relevant to applying HIACO to path planning. Compared with the original algorithm, the improved algorithm optimizes the probability formula, the tabu table mechanism, and the updating mechanism, and introduces more reasonable heuristic factors. The optimized HIACO not only draws on the strong ideas of the original algorithm but also mitigates, to some extent, the problems of premature convergence, convergence to sub-optimal solutions, and improper exploration. HIACO achieves better simulation results and the desired optimization. Combining the probability formula and the update formula, several parameters of HIACO are tested. The paper demonstrates the principle of HIACO and gives the best parameter ranges for path planning.
Keywords: ant colony optimization, heuristic integration, path planning, probability formula
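The standard ACO transition-probability formula that such improvements modify weighs pheromone against a heuristic desirability factor. A minimal sketch (the parameter names are assumptions, not the paper's notation):

```python
def transition_probabilities(tau, eta, allowed, alpha=1.0, beta=2.0):
    """Standard ACO node-selection rule: the probability of moving to node j
    is tau_j**alpha * eta_j**beta normalized over the allowed (non-tabu)
    nodes, where tau is the pheromone level and eta the heuristic factor
    (typically the inverse distance to j)."""
    weights = {j: (tau[j] ** alpha) * (eta[j] ** beta) for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}
```

Nodes already visited are simply excluded from `allowed`, which is the role of the tabu table.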
Procedia PDF Downloads 248
3191 The Various Forms of a Soft Set and Its Extension in Medical Diagnosis
Authors: Biplab Singha, Mausumi Sen, Nidul Sinha
Abstract:
In order to deal with the impreciseness and uncertainty of a system, D. Molodtsov introduced the concept of the 'soft set' in 1999. Since then, a number of related definitions have been conceptualized. This paper includes a study of various forms of soft sets with examples. The paper covers the concepts of the domain and co-domain of a soft set, conversion to one-one and onto functions, the matrix representation of a soft set and its relation to one-one functions, upper and lower triangular matrices, and the transpose and kernel of a soft set. The paper also gives an idea of the extension of soft sets to medical diagnosis. Here, two soft sets related to diseases and symptoms are considered, and using the AND and OR operations, a diagnosis of the disease is calculated through appropriate examples.
Keywords: kernel of a soft set, soft set, transpose of a soft set, upper and lower triangular matrix of a soft set
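A soft set (F, A) over a universe U maps each parameter in A to a subset of U, and the AND operation used in diagnosis intersects the image sets pairwise. A minimal sketch (the dictionary representation is an assumption for illustration):

```python
def soft_set_and(F, G):
    """AND of two soft sets over the same universe, represented as dicts
    from parameters to sets: H(a, b) = F(a) ∩ G(b) for every pair (a, b)."""
    return {(a, b): F[a] & G[b] for a in F for b in G}
```

For example, intersecting a symptom soft set with a disease soft set leaves, for each (symptom, disease) pair, exactly the patients consistent with both.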
Procedia PDF Downloads 342
3190 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs
Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.
Abstract:
Introduction: Deep learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (convolutional neural networks)? Will DL be the absolute tool for data classification? All current solutions consist of repositioning the variables in a 2D matrix using their correlation proximity, thereby obtaining an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision trees, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 hyperparameters used in the Neurops. By varying these 2 hyperparameters, we obtain a matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR. The total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison over several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification
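The statement that "a NIC is a probability that can be transformed into a grey level" corresponds to a simple linear quantization; a toy sketch of that step only, not the authors' pipeline:

```python
def probs_to_gray(probs):
    """Map a matrix of probabilities in [0, 1] to 8-bit grey levels,
    so that 0.0 becomes black (0) and 1.0 becomes white (255)."""
    return [[round(255 * p) for p in row] for row in probs]
```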
Procedia PDF Downloads 124
3189 Assessing the Resilience of the Insurance Industry under Solvency II
Authors: Vincenzo Russo, Rosella Giacometti
Abstract:
The paper aims to assess the insurance industry's resilience under Solvency II against adverse scenarios. Starting from the economic balance sheet available under Solvency II for insurance and reinsurance undertakings, we assume that assets and liabilities follow a bivariate geometric Brownian motion (GBM). Then, using the results available under Margrabe's formula, we establish an analytical solution to calibrate the volatility of the asset-liability ratio. In this way, we can estimate the probability of default and the probability of breaching the undertaking's Solvency Capital Requirement (SCR). Furthermore, since estimating the volatility of the solvency ratio became crucial for insurers in light of the financial crises of recent decades, we introduce a novel measure that we call the Resiliency Ratio. The Resiliency Ratio can be used, in addition to the Solvency Ratio, to evaluate the insurance industry's resilience under adverse scenarios. Finally, we introduce a simplified stress test tool to evaluate the economic balance sheet under stressed conditions. The proposed model features analytical tractability and a fast calibration procedure in which only the disclosed data available under Solvency II public reporting are needed. Using the data published regularly by the European Insurance and Occupational Pensions Authority (EIOPA) in aggregated form by country, an empirical analysis has been performed to calibrate the model and provide the related results at the country level.
Keywords: Solvency II, solvency ratio, volatility of the asset-liability ratio, probability of default, probability of breaching the SCR, resiliency ratio, stress test
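When the asset-liability ratio follows a GBM, the probability that it ends below a barrier at a given horizon (the barrier 1.0 for default, or an SCR-implied level for a breach) is a lognormal tail probability. A simplified sketch of that final step, not the paper's Margrabe-based calibration; the drift, volatility, and horizon are assumed inputs:

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def breach_probability(ratio_0, mu, sigma, horizon, barrier=1.0):
    """P(R_T < barrier) when the asset-liability ratio R_t follows a GBM
    with drift mu and volatility sigma, so that ln R_T is normal with mean
    ln R_0 + (mu - sigma**2 / 2) * T and variance sigma**2 * T."""
    z = log(barrier / ratio_0) - (mu - 0.5 * sigma ** 2) * horizon
    return norm_cdf(z / (sigma * sqrt(horizon)))
```

A better-capitalized undertaking (higher initial ratio) gets a lower breach probability, all else equal.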
Procedia PDF Downloads 81
3188 Inclusive Cities Decision Matrix Based on a Multidimensional Approach for Sustainable Smart Cities
Authors: Madhurima S. Waghmare, Shaleen Singhal
Abstract:
The concepts of smartness, inclusion, and sustainability are multidisciplinary and fuzzy, rooted in economic and social development theories and policies that are reflected in the spatial development of cities. It is a challenge to convert these concepts from aspirations into transformative actions, and there is a dearth of assessment and planning tools to support city planners and administrators in developing smart, inclusive, and sustainable cities. To address this gap, this study develops an inclusive cities decision matrix based on an exploratory approach using mixed methods. The matrix is soundly based on a review of multidisciplinary urban-sector literature and is refined and finalized based on inputs from experts and insights from case studies. The application of the decision matrix to the case study cities in India suggests that contemporary planning tools for cities need to be multidisciplinary and flexible to respond to the unique needs of diverse contexts. The paper suggests that a multidimensional and inclusive approach to city planning can play an important role in building sustainable smart cities.
Keywords: inclusive-cities decision matrix, smart cities in India, city planning tools, sustainable cities
Procedia PDF Downloads 155
3187 Robust Control of a Single-Phase Inverter Using Linear Matrix Inequality Approach
Authors: Chivon Choeung, Heng Tang, Panha Soth, Vichet Huy
Abstract:
This paper presents a robust control strategy for a single-phase DC-AC inverter with an output LC filter. An all-pass filter is utilized to create an artificial β-signal so that the proposed controller can be used directly in the dq-synchronous frame. The proposed robust controller utilizes state feedback control with integral action in the dq-synchronous frame. A linear matrix inequality-based optimization scheme is used to determine stabilizing controller gains that maximize the convergence rate to steady state in the presence of uncertainties. The uncertainties of the system are described as the potential variation range of the inductance and resistance in the LC filter.
Keywords: single-phase inverter, linear matrix inequality, robust control, all-pass filter
Procedia PDF Downloads 136
3186 The Lateral and Torsional Vibration Analysis of a Rotor-Bearing System Using Transfer Matrix Method
Authors: Mohammad Hadi Jalali, Mostafa Ghayour, Saeed Ziaei-Rad, Behrooz Shahriari
Abstract:
The vibration problems that can occur under the operational conditions of rotating machines may cause damage to the machine or even its complete failure. Therefore, dynamic analysis of rotors is vital in the design and development stages of rotating machines. In this study, the uncoupled torsional and lateral vibration analysis of a rotor-bearing system is carried out using the transfer matrix method. The Campbell diagram, critical speed, and the mode shape corresponding to the critical speed are obtained in order to evaluate the dynamic behavior of the rotor.
Keywords: transfer matrix method, rotor-bearing system, Campbell diagram, critical speed
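The transfer matrix method chains 2x2 point matrices (disc inertias) and field matrices (shaft stiffnesses) acting on the state [twist angle, torque]; natural frequencies are the speeds at which the boundary residual vanishes. A minimal torsional sketch for a free-free disc-shaft-disc rotor (the inertias and stiffness are illustrative values, not the paper's model):

```python
def matmul2(a, b):
    """2x2 matrix product for nested-list matrices."""
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def torsional_residual(omega, j1=1.0, j2=1.0, k=1.0):
    """Overall transfer matrix U = P2 * F * P1 for disc-shaft-disc, acting
    on the state [theta, T]. With free (zero-torque) ends, omega is a
    natural frequency exactly when the residual U[1][0] vanishes."""
    point1 = [[1.0, 0.0], [-j1 * omega ** 2, 1.0]]  # disc of inertia j1
    field = [[1.0, 1.0 / k], [0.0, 1.0]]            # shaft of stiffness k
    point2 = [[1.0, 0.0], [-j2 * omega ** 2, 1.0]]  # disc of inertia j2
    u = matmul2(point2, matmul2(field, point1))
    return u[1][0]
```

For j1 = j2 = J the nonzero natural frequency is sqrt(2k/J), where the residual crosses zero; a root search over omega recovers all frequencies of longer chains the same way.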
Procedia PDF Downloads 489
3185 Robust Segmentation of Salient Features in Automatic Breast Ultrasound (ABUS) Images
Authors: Lamees Nasser, Yago Diez, Robert Martí, Joan Martí, Ibrahim Sadek
Abstract:
Automated 3D breast ultrasound (ABUS) screening is a novel modality in medical imaging because of the characteristics it shares with other ultrasound modalities, in addition to the three orthogonal planes (i.e., axial, sagittal, and coronal) that are useful in the analysis of tumors. In the literature, few automatic approaches exist for typical tasks such as segmentation or registration. In this work, we deal with two problems concerning ABUS images: nipple and rib detection. The nipple and ribs are the most visible and salient features in ABUS images. Determining the nipple position plays a key role in some applications, for example, the evaluation of registration results or lesion follow-up. We present a nipple detection algorithm based on the color and shape of the nipple, as well as an automatic approach to detect the ribs. In fact, rib detection is considered one of the main stages in chest wall segmentation. This approach consists of four steps. First, images are normalized in order to minimize the intensity variability for a given set of regions within the same image or a set of images. Second, the normalized images are smoothed using an anisotropic diffusion filter. Next, the ribs are detected in each slice by analyzing the eigenvalues of the 3D Hessian matrix. Finally, a breast mask and a probability map of regions detected as ribs are used to remove false positives (FP). A qualitative and quantitative evaluation of a total of 22 cases is performed. Over all cases, the average and standard deviation of the root mean square error (RMSE) between manually annotated points placed on the rib surface and detected points on the rib borders are 15.1188 mm and 14.7184 mm, respectively.
Keywords: automated 3D breast ultrasound, eigenvalues of Hessian matrix, nipple detection, rib detection
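Eigenvalue analysis of the local 3D Hessian distinguishes tubular structures such as ribs from blobs and plates: a bright tube has one eigenvalue near zero and two large negative ones. A toy score in that spirit, not the paper's exact criterion:

```python
import numpy as np

def tubular_score(hessian):
    """Toy ribness score from the eigenvalues of a local 3x3 Hessian.
    Sorted by magnitude, a bright tube satisfies |l1| << |l2| ~ |l3|
    with l2, l3 < 0; a blob has all three magnitudes comparable."""
    l1, l2, l3 = sorted(np.linalg.eigvalsh(hessian), key=abs)
    if l2 >= 0 or l3 >= 0:
        return 0.0  # not a bright (dark-background) structure
    return (1.0 - abs(l1) / abs(l2)) * min(abs(l2) / abs(l3), 1.0)
```

An ideal tube scores 1 and an isotropic blob scores 0; in practice such a score is evaluated per voxel on the smoothed volume.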
Procedia PDF Downloads 329
3184 Occupational Attainment of Second Generation of Ethnic Minority Immigrants in the UK
Authors: Rukhsana Kausar, Issam Malki
Abstract:
The integration and assimilation of ethnic minority immigrants (EMIs) and their subsequent generations remain a serious unsettled issue in most host countries. This study conducts a gender-based labour market analysis to investigate specifically whether the second generation of ethnic minority immigrants in the UK is gaining access to professional and managerial employment and advantaged occupational positions on a par with their native counterparts. The data used to examine the labour market achievements of EMIs are taken from the Labour Force Survey (LFS) for the period 2014-2018. We apply a multivalued treatment under ignorability as proposed by Cattaneo (2010), which refers to treatment effects under the assumptions of (i) selection on observables and (ii) common support. We report estimates of the Average Treatment Effect (ATE), the Average Treatment Effect on the Treated (ATET), and Potential Outcome Means (POM) using three estimators: Regression Adjustment (RA), Augmented Inverse Probability Weighting (AIPW), and Inverse Probability Weighting-Regression Adjustment (IPWRA). We consider two cases: the first with four categories, where first-generation natives are the base category; the second combines all natives as the base group. Our findings suggest the following. Under Case 1, the estimated probabilities and differences across groups are consistently similar and highly significant. As expected, first-generation natives have the highest probability of higher career attainment among both men and women. The findings also suggest that first-generation immigrants perform better than the remaining two groups, including second-generation natives and immigrants. Furthermore, second-generation immigrants have a higher probability of attaining a higher professional career, while the probability is lower for a managerial career. Similar conclusions are reached under Case 2: both first- and second-generation immigrants have a lower probability of higher career and managerial attainment, and first-generation immigrants are found to perform better than second-generation immigrants.
Keywords: immigrants, second generation, occupational attainment, ethnicity
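The AIPW and IPWRA estimators mentioned above build on plain inverse probability weighting. A minimal IPW sketch for a binary treatment (variable names are assumptions, and the propensity scores p are taken as given rather than estimated):

```python
def ipw_ate(y, t, p):
    """Inverse Probability Weighting estimate of the Average Treatment
    Effect for binary treatments t, outcomes y, and known propensity
    scores p = P(T = 1 | X): E[YT/p] - E[Y(1-T)/(1-p)]."""
    n = len(y)
    treated = sum(yi * ti / pi for yi, ti, pi in zip(y, t, p)) / n
    control = sum(yi * (1 - ti) / (1 - pi) for yi, ti, pi in zip(y, t, p)) / n
    return treated - control
```

The augmented (AIPW) and IPWRA versions add an outcome-regression term, which makes the estimator doubly robust; common support corresponds to p being bounded away from 0 and 1.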
Procedia PDF Downloads 105
3183 A Single Loop Repetitive Controller for a Four Legs Matrix Converter Unit
Authors: Wesam Rohouma
Abstract:
The aim of this paper is to investigate the use of a repetitive controller to regulate the output voltage of a three-phase four-leg matrix converter for an aircraft ground power supply unit. The proposed controller improves the steady-state error and provides good regulation under different loading conditions. Simulation results for a 7.5 kW converter are presented to verify the operation of the proposed controller.
Keywords: matrix converter, power electronics, controller, regulation
Procedia PDF Downloads 1503
3182 Probabilistic Health Risk Assessment of Polycyclic Aromatic Hydrocarbons in Repeatedly Used Edible Oils and Finger Foods
Authors: Suraj Sam Issaka, Anita Asamoah, Abass Gibrilla, Joseph Richmond Fianko
Abstract:
Polycyclic aromatic hydrocarbons (PAHs) are a group of organic compounds that can form in edible oils during repeated frying and accumulate in fried foods. This study assesses the probability of carcinogenic and non-carcinogenic health risks due to PAH levels in popular finger foods (bean cakes, plantain chips, doughnuts) fried in edible oils (mixed vegetable, sunflower, soybean) from the Ghanaian market. Employing a probabilistic health risk assessment that considers variability and uncertainty in exposure and risk estimates provides a more realistic representation of potential health risks. Monte Carlo simulations with 10,000 iterations were used to estimate carcinogenic, mutagenic, and non-carcinogenic risks for different age groups (A: 6-10 years, B: 11-20 years, C: 20-70 years), food types (bean cake, plantain chips, doughnut), oil types (soybean, mixed vegetable, sunflower), and frying-oil re-use frequencies (once, twice, thrice). Our results suggest that, for age Group A, doughnuts posed the highest probability of carcinogenic risk (91.55%) exceeding the acceptable threshold, followed by bean cakes (43.87%) and plantain chips (7.72%), as well as the highest probability of unacceptable mutagenic risk (89.2%), followed by bean cakes (40.32%). Among age Group B, doughnuts again had the highest probability of exceeding carcinogenic risk limits (51.16%) and mutagenic risk limits (44.27%), while plantain chips exhibited the highest maximum carcinogenic risk. For adults in age Group C, bean cakes had the highest probability of unacceptable carcinogenic (50.88%) and mutagenic (46.44%) risks, though plantain chips showed the highest maximum values for both carcinogenic and mutagenic risks in this age group.
Regarding non-carcinogenic risks across the age groups, it was found that age Group A children who consumed doughnuts had a 68.16% probability of a hazard quotient (HQ) greater than 1, suggesting potential cognitive impairment and lower IQ scores due to early PAH exposure; this group also faced risks from consuming plantain chips and bean cakes. For age Group B, the consumption of plantain chips was associated with a 36.98% probability of an HQ greater than 1, indicating a potential risk of reduced lung function. In age Group C, the consumption of plantain chips was linked to a 35.70% probability of an HQ greater than 1, suggesting a potential risk of cardiovascular diseases.
Keywords: PAHs, fried foods, carcinogenic risk, non-carcinogenic risk, Monte Carlo simulations
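The reported probabilities of HQ > 1 come from Monte Carlo sampling of the exposure inputs. A minimal sketch of that idea with an assumed (truncated normal) dose distribution, not the study's exposure model:

```python
import random

def prob_hq_exceeds_one(dose_mean, dose_sd, rfd, iterations=10_000, seed=7):
    """Monte Carlo estimate of P(HQ > 1), where HQ = daily dose / reference
    dose (RfD) and the dose is drawn from a normal distribution truncated
    at zero (an illustrative choice of exposure distribution)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(iterations):
        dose = max(0.0, rng.gauss(dose_mean, dose_sd))
        if dose / rfd > 1.0:
            hits += 1
    return hits / iterations
```

The same loop, with lifetime cancer risk in place of HQ and a threshold of 1e-4, yields the carcinogenic-risk exceedance probabilities.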
Procedia PDF Downloads 11
3181 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test
Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman
Abstract:
At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, the FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can compare a number of the most commonly adopted probability distributions and parameter estimation methods relatively quickly using a Windows interface. The new version of FLIKE incorporates the multiple Grubbs and Beck test, which can identify multiple potentially influential low flows. This paper presents a case study of six catchments in eastern Australia that compares two outlier identification tests (the original Grubbs and Beck test and the multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson type 3 (LP3)) using the FLIKE software. It has been found that the multiple Grubbs and Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than the LP3 distribution with the original Grubbs and Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that the GEV distribution (with L moments) and the LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most cases; however, a difference of up to 38% has been noted in the flood quantiles for an annual exceedance probability (AEP) of 1 in 100 for one catchment. These findings need to be confirmed with a greater number of stations across other Australian states.
Keywords: floods, FLIKE, probability distributions, flood frequency, outlier
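The original Grubbs-Beck screen flags low outliers that fall below a log-space threshold. A minimal sketch in which the one-sided critical value k_n (normally read from tables for the record length and significance level) is supplied by the caller; the multiple version iterates a related test over increasing numbers of candidate low flows:

```python
import math

def grubbs_beck_low_outliers(flows, k_n):
    """Original Grubbs-Beck screen for potentially influential low flows:
    flag values below exp(mean - k_n * std) of the log-transformed record,
    where k_n is the tabulated one-sided critical value."""
    logs = [math.log(q) for q in flows]
    n = len(logs)
    mean = sum(logs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    threshold = math.exp(mean - k_n * std)
    return threshold, [q for q in flows if q < threshold]
```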
Procedia PDF Downloads 449
3180 Degree of Approximation of Functions Conjugate to Periodic Functions Belonging to Lipschitz Classes by Product Matrix Means
Authors: Smita Sonker
Abstract:
Various investigators have determined the degree of approximation of conjugate signals (functions) of functions belonging to the classes Lipα, Lip(α,p), Lip(ξ(t),p), and W(Lr, ξ(t), (β ≥ 0)) by matrix summability means, lower triangular matrix operators, and product means (i.e., (C,1)(E,1), (C,1)(E,q), (E,q)(C,1), (N,p,q)(E,1), and (E,q)(N,pn)) of their conjugate trigonometric Fourier series. In this paper, we determine the degree of approximation of functions conjugate to 2π-periodic functions f belonging to the classes Lipα and W(Lr; ξ(t); (β ≥ 0)) by (C,1)(T) means of their conjugate trigonometric Fourier series. We also review the above-mentioned work in the light of Lenski.
Keywords: signals, trigonometric Fourier approximation, class W(L^r, ξ(t)), conjugate Fourier series
Procedia PDF Downloads 395
3179 Performance Analysis and Optimization for Diagonal Sparse Matrix-Vector Multiplication on Machine Learning Unit
Authors: Qiuyu Dai, Haochong Zhang, Xiangrong Liu
Abstract:
Diagonal sparse matrix-vector multiplication is a well-studied topic in the fields of scientific computing and big data processing. However, when diagonal sparse matrices are stored in the DIA format, there can be a significant number of padded zero elements and scattered points, which can degrade the performance of the current DIA kernel and lead to excessive consumption of computational and memory resources. To address these issues, the authors propose the DIA-Adaptive scheme and its kernel, which leverage the parallel instruction sets on the MLU. The authors analyze the effect of allocating varying numbers of threads, clusters, and hardware architectures on the performance of SpMV using different formats. The experimental results indicate that the proposed DIA-Adaptive scheme performs well and offers excellent parallelism.
Keywords: adaptive method, DIA, diagonal sparse matrices, MLU, sparse matrix-vector multiplication
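For reference, DIA stores one array per nonzero diagonal, padded with zeros where the diagonal leaves the matrix; it is this padding that the abstract identifies as a performance problem. A scalar SpMV over that layout (the row-aligned padding convention here is an assumption, and real kernels vectorize this loop):

```python
import numpy as np

def dia_matvec(offsets, data, x):
    """y = A @ x for A in DIA format: data[k][i] holds A[i, i + offsets[k]],
    padded with zeros wherever the diagonal falls outside the matrix."""
    n = len(x)
    y = np.zeros(n)
    for off, diag in zip(offsets, data):
        for i in range(n):
            j = i + off
            if 0 <= j < n:
                y[i] += diag[i] * x[j]
    return y
```

For a tridiagonal matrix the format is dense in three arrays; for matrices with long, sparse diagonals most of `data` is padding, which is the case DIA-Adaptive targets.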
Procedia PDF Downloads 132
3178 Mechanical Properties of Fibre Reinforced High Performance Concrete
Authors: Laura Dembovska, Diana Bajare, Vitalijs Lusis, Genadijs Sahmenko, Aleksandrs Korjakins
Abstract:
This study focused on the mechanical properties of fibre reinforced High Performance Concrete. The most important benefits of adding fibres to the concrete mix are the hindrance of the development of microcracks, the delay of the propagation of microcracks into macroscopic cracks, and the better ductility after microcracks have occurred. This work presents an extensive comparative experimental study on six different types of fibres, including alkali-resistant glass, polyvinyl alcohol, polypropylene, and carbon fibres, with the same binding High Performance Concrete matrix. The purpose was to assess the influence of the type of fibre on the mechanical properties of fibre reinforced High Performance Concrete. Therefore, three main objectives were chosen for this study: 1) analyze the structure of the bulk cementitious matrix, 2) determine the influence of the fibres and their distribution in the matrix on the mechanical properties of fibre reinforced High Performance Concrete, and 3) characterize the microstructure of the fibre-matrix interface. Acknowledgement: This study was partially funded by European Regional Development Fund project Nr.1.1.1.1/16/A/007 “A New Concept for Sustainable and Nearly Zero-Energy Buildings” and COST Action TU1404 conference grants.
Keywords: high performance concrete, fibres, mechanical properties, microstructure
Procedia PDF Downloads 281
3177 Forecast Based on an Empirical Probability Function with an Adjusted Error Using Propagation of Error
Authors: Oscar Javier Herrera, Manuel Angel Camacho
Abstract:
This paper addresses a cutting-edge method of business demand forecasting based on an empirical probability function, applicable when the historical behavior of the data is random. Additionally, it presents error determination based on the numerical technique of propagation of errors. The methodology began with a characterization and diagnosis of the demand-planning process as part of production management; new ways to predict demand through probability techniques, and to calculate the associated error using numerical methods, were then investigated, all based on the behavior of the data. The analysis considered the specific business circumstances of a company in the communications sector located in Bogota, Colombia. In conclusion, this application made it possible to obtain the adequate stock of the products required by the company to provide its services, helping the company reduce its service time, increase the client satisfaction rate, reduce stock that had not been in rotation for a long time, code its inventory, and plan reorder points for the replenishment of stock.
Keywords: demand forecasting, empirical distribution, propagation of error, Bogota
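The two ingredients named in the abstract can be sketched together: an empirical probability function (mean and spread taken directly from the observed frequencies, with no distributional assumption) and the propagation-of-errors rule, under which independent uncertainties combine in quadrature. This is an illustrative sketch with invented demand histories, not the company data or the paper's full diagnostic procedure.

```python
# Hedged sketch: forecast from an empirical pmf, then propagate the
# per-product errors into the error of the total demand.
from collections import Counter
import math

def empirical_forecast(history):
    """Mean and standard deviation of the empirical probability
    function built from raw historical observations."""
    counts = Counter(history)
    n = len(history)
    mean = sum(v * c / n for v, c in counts.items())
    var = sum((v - mean) ** 2 * c / n for v, c in counts.items())
    return mean, math.sqrt(var)

# Hypothetical demand histories for two products
mean_a, sd_a = empirical_forecast([10, 12, 11, 10, 13, 12])
mean_b, sd_b = empirical_forecast([5, 7, 6, 6, 8, 7])

# Propagation of error for a sum of independent demands:
# sigma_total = sqrt(sigma_a^2 + sigma_b^2)
total_mean = mean_a + mean_b
total_sd = math.sqrt(sd_a ** 2 + sd_b ** 2)
```

The forecast plus a multiple of `total_sd` then gives a safety-stock level, which is how such an error estimate feeds reorder-point planning.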
Procedia PDF Downloads 629
3176 Association between Healthy Eating Index-2015 Scores and the Probability of Sarcopenia in Community-Dwelling Iranian Elderly
Authors: Zahra Esmaeily, Zahra Tajari, Shahrzad Daei, Mahshid Rezaei, Atefeh Eyvazkhani, Marjan Mansouri Dara, Ahmad Reza Dorosty Motlagh, Andriko Palmowski
Abstract:
Objective: Sarcopenia (SPA) is associated with frailty and disability in the elderly. Adherence to current dietary guidelines, in addition to physical activity, could play a role in the prevention of muscle wasting and weakness. The Healthy Eating Index-2015 (HEI) is a tool to assess diet quality as recommended in the U.S. Dietary Guidelines for Americans. This study aimed to investigate whether there is a relationship between HEI scores and the probability of SPA (PS) among the Tehran elderly. Method: A previously validated semi-quantitative food frequency questionnaire was used to assess HEI and the dietary intake of randomly selected elderly people living in Tehran, Iran. Handgrip strength (HGS) was measured to evaluate the PS. Statistical evaluation included descriptive analysis and standard test procedures. Result: 201 subjects were included. Those probably suffering from SPA (as determined by HGS) had significantly lower HEI scores (p = 0.02). After adjusting for confounders, HEI scores and HGS were still associated (adjusted R2 = 0.56, slope β = 0.03, P = 0.09). Elderly people with a low probability of SPA consumed more monounsaturated and polyunsaturated fatty acids (P = 0.06) and ingested less added sugars and saturated fats (P = 0.01 and P = 0.02, respectively). Conclusion: In this cross-sectional study, HEI scores are associated with the probability of SPA. Adhering to current dietary guidelines might contribute to ameliorating muscle strength and mass in aging individuals.
Keywords: aging, HEI-2015, Iranian, sarcopenia
Procedia PDF Downloads 202
3175 Microscopic Analysis of Bulk, High-Tc Superconductors by Transmission Kikuchi Diffraction
Authors: Anjela Koblischka-Veneva, Michael R. Koblischka
Abstract:
In this contribution, Transmission Kikuchi Diffraction (TKD, sometimes called t-EBSD) is applied to bulk, melt-grown YBa₂Cu₃O₇ (YBCO) superconductors prepared by the MTMG (melt-textured melt-grown) technique and the infiltration growth (IG) technique. The TEM slices required for the analysis were prepared by means of Focused Ion-Beam (FIB) milling of mechanically polished sample surfaces, which enables proper selection of the regions of interest for the investigations. The required optical transparency was reached by an additional polishing step of the resulting surfaces using FIB Ga-ion and Ar-ion milling. The improved spatial resolution of TKD enabled the investigation of the tiny Y₂BaCuO₅ (Y-211) particles, with diameters of about 50-100 nm, embedded within the YBCO matrix, and of other added secondary-phase particles. With the TKD technique, the microstructural properties of the YBCO matrix are studied in detail. It is observed that the matrix shows effects of stress/strain depending on the size and distribution of the embedded particles, which are important for providing additional flux pinning centers in such superconducting bulk samples. Using Kernel Average Misorientation (KAM) maps, the strain induced in the superconducting matrix around the particles, which increases the flux pinning effectivity, can be clearly revealed. This type of analysis of EBSD/TKD data is, therefore, also important for other material systems where nanoparticles are embedded in a matrix.
Keywords: transmission Kikuchi diffraction, EBSD, TKD, embedded particles, superconductors, YBa₂Cu₃O₇
Procedia PDF Downloads 133
3174 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies, in terms of the tradeoff between performance and latency, by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, determined by the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically sized big data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied.
We study the setup in which the identity of the matrix of interest must be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
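The straggler-tolerance and recovery-threshold ideas above can be sketched with the simplest possible linear code (this is a toy (3,2) MDS-style construction for illustration, not the paper's PSGPD scheme, and it has no privacy component): X is split into two row blocks, a third parity block X1+X2 is added, and W = XY is recoverable from ANY two of the three workers, so the recovery threshold is 2 and one straggler is tolerated.

```python
# Toy coded matrix multiplication: encode row blocks of X, let each
# worker multiply one coded block by Y, decode from any 2 of 3 results.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def msub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

X1 = [[1, 2], [3, 4]]            # top row block of X
X2 = [[5, 6], [7, 8]]            # bottom row block of X
Y  = [[1, 0], [1, 1]]

tasks = [X1, X2, madd(X1, X2)]   # encoding: X1, X2, and parity X1+X2
results = [matmul(T, Y) for T in tasks]  # each done by one worker

# Suppose worker 1 (holding X2*Y) straggles: decode from workers 0 and 2.
W_top = results[0]                        # X1*Y directly
W_bottom = msub(results[2], results[0])   # (X1+X2)*Y - X1*Y = X2*Y
```

With uncoded splitting, both workers holding X1 and X2 would be mandatory; the code trades one extra worker for tolerance of any single straggler, which is the performance/latency tradeoff the abstract describes.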
Procedia PDF Downloads 121
3173 Secured Power Flow Algorithm Including Economic Dispatch with GSDF Matrix Using LabVIEW
Authors: Slimane Souag, Amel Graa, Farid Benhamida
Abstract:
In this paper, we present a new method for solving the secured power flow problem with economic dispatch, using the DC power flow method and Generation Shift Distribution Factors (GSDF); we create a graphical interface in LabVIEW as a virtual instrument. The DC power flow reduces the power flow problem to a set of linear equations, which makes the iterative calculation very fast, and the GSDF matrix represents the effects of single and multiple generator MW changes on the transmission lines. The effectiveness of the developed method is demonstrated through its application to an IEEE 14-bus test system. The calculation results show excellent performance of the proposed method with regard to computation time and quality of results.
Keywords: electrical power system security, economic dispatch, sensitivity matrix, LabVIEW
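A minimal sketch of the two ingredients just described, on a hypothetical 3-bus system rather than the IEEE 14-bus case (all line reactances 1 p.u., bus 3 as slack): the DC power flow is one linear solve of B'θ = P, and a GSDF entry is the change in a line flow per 1 p.u. shift of generation at a bus.

```python
# DC power flow + one GSDF entry on a 3-bus toy system (assumed data).
def solve2(B, P):
    """Solve the reduced 2x2 system B * theta = P by Cramer's rule."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(P[0] * B[1][1] - P[1] * B[0][1]) / det,
            (B[0][0] * P[1] - B[1][0] * P[0]) / det]

# Reduced susceptance matrix (slack bus 3 removed); lines 1-2, 1-3, 2-3
# each with susceptance 1 p.u. give diagonal 2, off-diagonal -1.
B = [[2.0, -1.0], [-1.0, 2.0]]
P = [1.0, -0.5]                 # net injections at buses 1 and 2 (p.u.)
theta = solve2(B, P)            # bus angles; slack angle is 0

f12 = theta[0] - theta[1]       # flow on line 1-2 (reactance 1 p.u.)

# GSDF of line 1-2 for a generation shift at bus 1: re-solve with the
# injection at bus 1 raised by 1 p.u. and take the flow difference.
theta_shift = solve2(B, [P[0] + 1.0, P[1]])
gsdf_12_1 = (theta_shift[0] - theta_shift[1]) - f12   # = 1/3 here
```

Because the system is linear, the GSDF is independent of the operating point, which is what lets the security check screen many generator MW changes without re-running a full power flow each time.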
Procedia PDF Downloads 488
3172 Trace Network: A Probabilistic Relevant Pattern Recognition Approach to Attribution Trace Analysis
Authors: Jian Xu, Xiaochun Yun, Yongzheng Zhang, Yafei Sang, Zhenyu Cheng
Abstract:
Network attack prevention is a critical research area of information security. Network attacks would be deterred if attribution techniques were capable of tracing back to the attackers after a hacking event. Attributing an attack to a particular identification therefore becomes one of the important tasks when analysts attempt to differentiate and profile the attacker behind a piece of attack trace. To assist analysts in exposing the attackers behind the scenes, this paper investigates the connections between attribution traces and proposes probabilistic-relevance-based attribution patterns. This method facilitates the evaluation of the plausible relevance between different traceable identifications. Furthermore, by analyzing the connections among traces, it can confirm the existence probability of a certain organization as well as discover its affinitive partners by drawing a relevance matrix from the attribution traces.
Keywords: attribution trace, probabilistic relevance, network attack, attacker identification
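One simple way to picture a relevance matrix drawn from attribution traces (an illustrative sketch with invented identifiers, not the paper's probabilistic model): estimate the relevance of identification v given identification u as the fraction of traces containing u that also contain v.

```python
# Toy relevance matrix over traceable identifications; the trace
# contents and the conditional-frequency estimator are assumptions.
traces = [
    {"ip_a", "email_x", "malware_m"},
    {"ip_a", "email_x"},
    {"ip_b", "malware_m"},
    {"ip_a", "malware_m"},
]

ids = sorted(set().union(*traces))

def relevance(u, v):
    """Empirical P(v | u): share of traces with u that also contain v."""
    with_u = [t for t in traces if u in t]
    return sum(v in t for t in with_u) / len(with_u)

matrix = {u: {v: relevance(u, v) for v in ids} for u in ids}
```

Rows of the matrix with several strong entries then point at clusters of identifications that plausibly belong to the same attacker or organization.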
Procedia PDF Downloads 365
3171 Designing Information Systems in Education as Prerequisite for Successful Management Results
Authors: Vladimir Simovic, Matija Varga, Tonco Marusic
Abstract:
This research paper presents matrix technology models and examples of information systems in education (in the Republic of Croatia and in Germany) in support of business, education (learning and teaching), and e-learning. We researched and described the aims and objectives of the main processes in education and technology, together with the main matrix classes of data. The paper gives an example of matrix technology with a detailed description of the processes related to specific data classes in education, and an example module that supports the processes 'Filling in the directory and the diary of work' and 'Evaluation'. At the lower level, we also researched and described all activities that take place within the lower-level processes in education, as well as the characteristics and functioning of the modules 'Fill the directory and the diary of work' and 'Evaluation'. For the analysis of the affinity between the aforementioned processes and/or sub-processes, we used our application model created in Visual Basic, which is based on an algorithm for analyzing the affinity between the observed processes and/or sub-processes.
Keywords: designing, education management, information systems, matrix technology, process affinity
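The process/sub-process affinity analysis mentioned above can be sketched as follows (the authors' tool is in Visual Basic and its algorithm is not specified here; this Python fragment, with invented process and data-class names, only mirrors the general idea of comparing processes by the data classes they share):

```python
# Hypothetical affinity analysis: processes are related in proportion
# to the overlap of the data classes they read or write.
process_data = {
    "fill_directory_and_diary": {"student", "teacher", "lesson"},
    "evaluation":               {"student", "grade", "lesson"},
    "scheduling":               {"teacher", "room"},
}

def affinity(p, q):
    """Jaccard affinity between the data-class sets of two processes."""
    a, b = process_data[p], process_data[q]
    return len(a & b) / len(a | b)

pairs = {(p, q): affinity(p, q)
         for p in process_data for q in process_data if p < q}
```

High-affinity pairs are candidates for sharing a module or a database segment, which is the kind of conclusion a matrix-technology design step draws from such a table.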
Procedia PDF Downloads 437