Search results for: probability matrix
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3419


2909 Some Results for F-Minimal Hypersurfaces in Manifolds with Density

Authors: M. Abdelmalek

Abstract:

In this work, we study hypersurfaces of constant weighted mean curvature embedded in weighted manifolds and give a condition under which these hypersurfaces are minimal. The condition is expressed through the ellipticity of the weighted Newton transformations. In particular, we prove that two compact hypersurfaces of constant weighted mean curvature embedded in space forms, whose boundaries intersect in at least one point, must be transverse. The method is based on computing the matrix of the second fundamental form at a boundary point and then the matrix associated with the Newton transformations; from the resulting equality, we obtain the weighted elementary symmetric functions on the boundary of the hypersurface. We close with examples and applications; in particular, in Euclidean space we use the above result to prove the Alexandrov spherical caps conjecture in the weighted case.
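For reference, a compact statement of the weighted-manifold setting assumed above, in one common sign convention (conventions for the weighted mean curvature differ between authors):

```latex
% Weighted manifold (M, g, e^{-f}dV); hypersurface \Sigma \subset M with unit normal \nu.
% Weighted area and the weighted mean curvature arising from its first variation:
A_f(\Sigma) = \int_{\Sigma} e^{-f}\, dA, \qquad H_f = H - \langle \nabla f, \nu \rangle .
% \Sigma is f-minimal precisely when H_f \equiv 0, i.e. H = \langle \nabla f, \nu \rangle .
```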

Keywords: weighted mean curvature, weighted manifolds, ellipticity, Newton transformations

Procedia PDF Downloads 93
2908 Profit-Based Artificial Neural Network (ANN) Trained by Migrating Birds Optimization: A Case Study in Credit Card Fraud Detection

Authors: Ashkan Zakaryazad, Ekrem Duman

Abstract:

A typical classification technique ranks the instances in a data set according to the likelihood of belonging to one (positive) class. A credit card (CC) fraud detection model ranks transactions by their probability of being fraudulent. This approach is often criticized, however, because firms care not about the fraud probability itself but about the profitability or costliness of detecting a fraudulent transaction. The key contribution of this study is to focus on profit maximization in the model-building step: the proposed artificial neural network is trained to maximize profit rather than to minimize prediction error. Moreover, several studies have shown that the back-propagation algorithm, like other gradient-based algorithms, tends to get trapped in local optima, whereas swarm-based algorithms are more successful in this respect. We therefore train our profit-maximizing ANN with the recently introduced Migrating Birds Optimization (MBO) algorithm.
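For illustration, a minimal sketch of a profit-based objective of this kind, with invented cost figures, toy data, and a random-search stand-in for MBO (network size, costs and data are assumptions, not the authors' setup):

```python
# Score a small neural network by expected profit rather than sum of squared errors.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                  # transaction features (toy)
y = (rng.random(1000) < 0.05).astype(float)     # ~5% fraud labels (toy)
amount = rng.uniform(10, 500, size=1000)        # transaction amounts (toy)
COST_INSPECT = 5.0                              # assumed cost of reviewing one flagged transaction

def forward(w, X):
    """One hidden layer, sigmoid output; w packs both weight matrices."""
    W1 = w[:8 * 4].reshape(8, 4)
    W2 = w[8 * 4:].reshape(4, 1)
    h = np.tanh(X @ W1)
    return 1 / (1 + np.exp(-(h @ W2))).ravel()

def profit(w, threshold=0.5):
    """Profit = recovered fraud amounts minus inspection cost of all flags."""
    flagged = forward(w, X) > threshold
    return (amount * y * flagged).sum() - COST_INSPECT * flagged.sum()

# Stand-in for MBO: evaluate a flock of random candidates and keep the best.
# MBO itself improves candidates via leader/follower neighbor sharing.
candidates = [rng.normal(size=8 * 4 + 4) for _ in range(200)]
best = max(candidates, key=profit)
print("best expected profit:", round(profit(best), 2))
```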

Keywords: neural network, profit-based neural network, sum of squared errors (SSE), MBO, gradient descent

Procedia PDF Downloads 475
2907 Aperiodic and Asymmetric Fibonacci Quasicrystals: Next Big Future in Quantum Computation

Authors: Jatindranath Gain, Madhumita DasSarkar, Sudakshina Kundu

Abstract:

Quantum information is stored in states with multiple quasiparticles, which have a topological degeneracy. Topological quantum computation is concerned with two-dimensional many-body systems that support such excitations. Anyons are the elementary building blocks of topological quantum computation. When anyons tunnel in a double-layer system, the system can transition to an exotic non-Abelian state and produce Fibonacci anyons, which are powerful enough for universal topological quantum computation (TQC). Here, the exotic behavior of a Fibonacci superlattice is studied using analytical transfer matrix methods, and hence Fibonacci anyons. Such anyons could form the basis of a quantum computer, an emerging and exciting prospect in today's nanophotonics and quantum computation.
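As an illustration of the analytical transfer matrix method mentioned above, the following sketch computes normal-incidence optical transmission through a Fibonacci-sequenced two-material superlattice; the refractive indices and thicknesses are illustrative assumptions, not the paper's device:

```python
# Transmission through a Fibonacci superlattice via 2x2 characteristic matrices.
import numpy as np

def fibonacci_word(generations):
    """A -> AB, B -> A substitution rule for the aperiodic stacking sequence."""
    s = "A"
    for _ in range(generations):
        s = "".join("AB" if c == "A" else "A" for c in s)
    return s

# Assumed layer parameters: refractive indices and quarter-wave thicknesses.
n = {"A": 2.1, "B": 1.45}
lam0 = 700e-9                      # design wavelength (m)
d = {k: lam0 / (4 * v) for k, v in n.items()}

def transmittance(lam, word, n0=1.0, ns=1.52):
    M = np.eye(2, dtype=complex)
    for c in word:                 # multiply the per-layer transfer matrices
        delta = 2 * np.pi * n[c] * d[c] / lam
        p = n[c]
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / p],
                          [1j * p * np.sin(delta), np.cos(delta)]])
    m11, m12 = M[0]
    m21, m22 = M[1]
    t = 2 * n0 / (n0 * m11 + n0 * ns * m12 + m21 + ns * m22)
    return (ns / n0) * abs(t) ** 2

word = fibonacci_word(8)
for lam in np.linspace(0.6, 1.4, 5) * lam0:
    print(f"lambda = {lam * 1e9:6.1f} nm  T = {transmittance(lam, word):.3f}")
```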

Keywords: quantum computing, quasicrystals, multiple quantum wells (MQWs), transfer matrix method, Fibonacci anyons, quantum Hall effect, nanophotonics

Procedia PDF Downloads 390
2906 An Efficient Approach to Speed up Non-Negative Matrix Factorization for High-Dimensional Data

Authors: Bharat Singh, Om Prakash Vyas

Abstract:

Nowadays, applications dealing with high-dimensional data are used extensively in popular areas, and various approaches to handle such data have been developed by researchers over the last few decades. One problem with NMF approaches is that their randomized initial values cannot provide an absolute optimum within a limited number of iterations, only a local one. We therefore propose a new approach that chooses the initial values of the decomposition so as to address the issue of computational expense. We have devised an algorithm, based on Particle Swarm Optimization (PSO), for initializing the values of the decomposed matrices. Experimental results show that the proposed method converges much faster than other low-rank approximation techniques such as simple multiplicative NMF and ACLS.
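A minimal sketch of the idea described above, with assumed hyper-parameters: seed the NMF factors with a basic PSO, then refine with the standard multiplicative updates:

```python
import numpy as np

rng = np.random.default_rng(1)
V = np.abs(rng.normal(size=(60, 40)))      # toy non-negative data matrix
k = 5                                       # target rank

def rmse(W, H):
    return np.sqrt(np.mean((V - W @ H) ** 2))

def cost(x):
    W = x[:V.shape[0] * k].reshape(V.shape[0], k)
    H = x[V.shape[0] * k:].reshape(k, V.shape[1])
    return rmse(W, H)

# --- PSO over flattened (W, H), kept non-negative by clipping ---
n_particles, dim = 20, (V.shape[0] + V.shape[1]) * k
pos = np.abs(rng.normal(size=(n_particles, dim)))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.full(n_particles, np.inf)
gbest = pos[0]
for _ in range(50):
    for i in range(n_particles):
        c = cost(pos[i])
        if c < pbest_cost[i]:
            pbest[i], pbest_cost[i] = pos[i].copy(), c
    gbest = pbest[np.argmin(pbest_cost)]
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, None)

# --- refine with multiplicative updates (Lee & Seung) from the PSO seed ---
W = gbest[:V.shape[0] * k].reshape(V.shape[0], k) + 1e-9
H = gbest[V.shape[0] * k:].reshape(k, V.shape[1]) + 1e-9
for _ in range(100):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
print("RMSE after PSO seed + multiplicative NMF:", round(rmse(W, H), 4))
```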

Keywords: ALS, NMF, high dimensional data, RMSE

Procedia PDF Downloads 343
2905 A Pull-Out Fiber/Matrix Interface Characterization of Vegetal Fibers Reinforced Thermoplastic Polymer Composites, the Influence of the Processing Temperature

Authors: Duy Cuong Nguyen, Ali Makke, Guillaume Montay

Abstract:

This work presents an improved single-fiber pull-out test for fiber/matrix interface characterization. The test has been used to study the interfacial shear strength (IFSS) of hemp-fiber-reinforced polypropylene (PP). To this end, the fiber diameter was carefully measured using a tomography-inspired method, so the fiber cross-section contour can be approximated either by a circle or by a polygon. The results show that the IFSS is overestimated if the circular approximation is used. The influence of the molding temperature on the IFSS has also been studied: a molding temperature of 183°C leads to the best interface properties, while above or below this temperature the interface strength is reduced.
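For illustration, a small sketch of the IFSS calculation with made-up numbers, showing why a circular approximation of a lobed fiber contour inflates the apparent IFSS (the apparent IFSS from a pull-out test is F_max / (perimeter x embedded length), so underestimating the contour perimeter inflates the result):

```python
import numpy as np

F_max = 0.25          # peak pull-out force (N), illustrative
L_e = 0.8e-3          # embedded length (m), illustrative

# Toy polygonal cross-section contour (m), e.g. from tomography slices.
theta = np.linspace(0, 2 * np.pi, 12, endpoint=False)
r = 30e-6 * (1 + 0.25 * np.cos(3 * theta))      # lobed, non-circular fiber
pts = np.c_[r * np.cos(theta), r * np.sin(theta)]
perim_poly = np.sum(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1))

# Circular approximation from the mean "diameter" seen in the microscope.
perim_circle = np.pi * (2 * r.mean())

for name, P in [("polygon", perim_poly), ("circle", perim_circle)]:
    print(f"{name:8s} perimeter {P * 1e6:6.1f} um  "
          f"IFSS {F_max / (P * L_e) / 1e6:5.2f} MPa")
```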

Keywords: composite, hemp, interface, pull-out, processing, polypropylene, temperature

Procedia PDF Downloads 392
2904 Dynamic Reroute Modeling for Emergency Evacuation: Case Study of Brunswick City, Germany

Authors: Yun-Pang Flötteröd, Jakob Erdmann

Abstract:

Human behavior during evacuations is quite complex, and one of the critical behaviors affecting the efficiency of an evacuation is route choice; the respective simulation models therefore need to function properly. In this paper, Simulation of Urban Mobility's (SUMO) dynamic route modeling during evacuation, i.e., its rerouting functions, is examined with a real case study, and the consistency of the simulation with reality is checked. Four influence factors, (1) the time needed to get information, (2) the probability of cancelling a trip, (3) the probability of using navigation equipment, and (4) the rerouting and information-updating period, are considered to analyze possible traffic impacts during the evacuation and to examine the rerouting functions in SUMO. Furthermore, some behavioral characteristics of the case study are analyzed using the corresponding detector data and applied in the simulation. The experimental results show that the dynamic route modeling in SUMO handles the proposed scenarios properly. Some issues and functional needs related to route choice are discussed, and further improvements are suggested.
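A minimal sketch of how these four influence factors can be combined per driver in a simulation; the rates are illustrative assumptions, not values calibrated to the Brunswick case:

```python
import random

random.seed(42)
P_CANCEL = 0.10        # probability a planned trip is cancelled
P_NAV = 0.60           # probability a driver has navigation equipment
INFO_DELAY_MAX = 600   # s until the evacuation order reaches a driver
REROUTE_PERIOD = 60    # s between routing/information updates

def driver_plan():
    if random.random() < P_CANCEL:
        return None                                # trip cancelled
    t_info = random.uniform(0, INFO_DELAY_MAX)     # time the driver learns of it
    navigated = random.random() < P_NAV
    # Drivers with navigation re-evaluate their route every REROUTE_PERIOD
    # seconds after being informed; others keep their initial route.
    first_reroute = t_info + REROUTE_PERIOD if navigated else None
    return t_info, navigated, first_reroute

plans = [p for p in (driver_plan() for _ in range(10000)) if p]
rerouting = sum(1 for _, nav, _ in plans if nav)
print(f"{len(plans)} departing vehicles, {rerouting} with periodic rerouting")
```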

Keywords: evacuation, microscopic traffic simulation, rerouting, SUMO

Procedia PDF Downloads 194
2903 System of Linear Equations, Gaussian Elimination

Authors: Rabia Khan, Nargis Munir, Suriya Gharib, Syeda Roshana Ali

Abstract:

In this paper, linear equations are discussed in detail along with elimination methods. Gaussian elimination and Gauss-Jordan schemes are carried out to solve linear systems of equations. The paper comprises an introduction to matrices and the direct methods for linear equations. The goal of this research was to analyze different elimination techniques for linear equations and to measure the performance of Gaussian elimination and the Gauss-Jordan method, in order to find their relative importance and advantages in the field of symbolic and numeric computation. The purpose is to revise the introductory concepts of linear equations, matrix theory and the forms of Gaussian elimination through which the performance of the Gauss-Jordan and Gaussian elimination methods can be measured.
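For reference, compact implementations of the two schemes being compared, both with partial pivoting; the test system is a standard textbook example:

```python
import numpy as np

def gaussian_elimination(A, b):
    A = A.astype(float).copy(); b = b.astype(float).copy()
    n = len(b)
    for k in range(n):                       # forward stage
        p = k + np.argmax(abs(A[k:, k]))     # partial pivoting
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # backward stage (substitution)
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def gauss_jordan(A, b):
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        p = k + np.argmax(abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                      # normalize the pivot row
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]       # eliminate above and below: no backward stage
    return M[:, -1]

A = np.array([[2., 1., -1.], [-3., -1., 2.], [-2., 1., 2.]])
b = np.array([8., -11., -3.])
print(gaussian_elimination(A, b), gauss_jordan(A, b))   # both -> [ 2.  3. -1.]
```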

Keywords: direct, indirect, backward stage, forward stage

Procedia PDF Downloads 598
2902 Dynamics of Adiabatic Rapid Passage in an Open Rabi Dimer Model

Authors: Justin Zhengjie Tan, Yang Zhao

Abstract:

Adiabatic rapid passage, a popular method for achieving population inversion, is studied in a Rabi dimer model in the presence of noise, which acts as a dissipative environment. The integration of the multi-Davydov D2 Ansatz into the time-dependent variational framework enables us to model this intricate quantum system accurately. By driving the system with a field strength resonant with the energy spacing, the probability of adiabatic rapid passage, modelled after the Landau-Zener model, can be derived along with several other observables, such as the photon population. The effects of a dissipative environment are reproduced by coupling the system to a common phonon mode. By manipulating the strength and frequency of the driving field, along with the coupling strength of the phonon mode to the qubits, we are able to control the qubit and photon dynamics and subsequently increase the probability of adiabatic rapid passage.
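For orientation, a sketch of the Landau-Zener estimate referred to above, in one common convention for a linearly swept two-level system (the parameters are illustrative, not the paper's):

```python
# For H(t) = [[-v t, Delta], [Delta, v t]] / 2, the minimum gap is Delta and the
# diabatic (non-passage) probability is exp(-pi Delta^2 / (2 hbar v)).
import numpy as np

hbar = 1.0                         # natural units
Delta = 0.4                        # coupling / minimum gap
for v in (0.05, 0.2, 1.0, 5.0):    # sweep rates of the detuning
    p_diabatic = np.exp(-np.pi * Delta ** 2 / (2 * hbar * v))
    print(f"v = {v:4.2f}  P(adiabatic passage) = {1 - p_diabatic:.3f}")
```

The slower the sweep, the closer the passage probability is to one, which is the qualitative handle on population inversion exploited above.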

Keywords: quantum electrodynamics, adiabatic rapid passage, Landau-Zener transitions, dissipative environment

Procedia PDF Downloads 87
2901 Metallic-Diamond Tools with Increased Abrasive Wear Resistance for Grinding Industrial Floor Systems

Authors: Elżbieta Cygan-Bączek, Piotr Wyżga

Abstract:

This paper presents the results of research on the physical, mechanical, and tribological properties of materials constituting the matrix in sintered metallic-diamond tools. Ground powders based on the Fe-Mn-Cu-Sn-C system were modified with micro-sized particles of a ceramic phase (SiC, Al₂O₃) and consolidated by spark plasma sintering (SPS) to a relative density of over 98% at 850-950°C, under a pressure of 35 MPa, for 10 min. After sintering, the microstructure was analyzed using scanning electron microscopy. The resulting materials were tested for apparent density determined by Archimedes' method, Rockwell hardness (scale B) and Young's modulus, as well as for technological properties. The performance of the obtained diamond composites was compared with the base material (Fe-Mn-Cu-Sn-C) and the commercial Co-20% WC alloy. The hardness of the composites reached its maximum at a sintering temperature of 900°C; it should therefore be considered that the optimal physical and mechanical properties of the investigated composites are obtained at this temperature. Research on the tribological properties showed that the composites modified with micro-sized ceramic-phase particles exhibit more than twice the wear resistance of the base materials and the commercial Co-20% WC alloy, whereas the composites containing Al₂O₃ particles in the matrix material showed the lowest abrasive wear resistance. The manufacturing technology presented in the paper is economically justified and can be successfully used in the production of matrices for sintered diamond-impregnated tools for the machining of industrial floor systems. Acknowledgment: The study was performed under LIDER IX Research Project No. LIDER/22/0085/L-9/17/NCBR/2018, entitled "Innovative metal-diamond tools without the addition of critical raw materials for applications in the process of grinding industrial floor systems", funded by the National Centre for Research and Development of Poland, Warsaw.

Keywords: abrasive wear resistance, metal matrix composites, sintered diamond tools, Spark Plasma Sintering

Procedia PDF Downloads 78
2900 A Comparative Study on Creep Modeling in Composites

Authors: Roham Rafiee, Behzad Mazhari

Abstract:

Composite structures, with their remarkable properties, have gained considerable popularity in the last few decades. Among all types, polymer matrix composites are used extensively due to their unique characteristics, including low weight, convenient fabrication and low cost. Having a polymer as the matrix, these composites show different creep behavior compared to metals and even other types of composites, since most polymers creep even at room temperature. One of the most challenging topics in creep is introducing techniques for predicting the long-term creep behavior of materials, and the appropriate method depends on the material being studied. Methods proposed for predicting the long-term creep behavior of polymer matrix composites can be divided into five categories: (1) analytical modeling, (2) empirical modeling, (3) superposition-based (semi-empirical) modeling, (4) rheological modeling, and (5) finite element modeling. Each of these methods has individual characteristics. Studies have shown that none of them can predict the long-term creep behavior of all PMC composites in all circumstances (loading, temperature, etc.), but each has its own priority in different situations; the reason lies in the theoretical basis of these methods. In this study, after a brief review of the background theory of each method, they are compared in terms of their applicability to predicting the long-term behavior of composite structures. Finally, the discussed methods are examined against experimental studies carried out by other researchers.
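As an illustration of the semi-empirical route in category (3), the following sketch fits Findley's power law, eps(t) = eps0 + A·t^n, to synthetic creep data and extrapolates it (the "data" are generated, not measured):

```python
import numpy as np
from scipy.optimize import curve_fit

def findley(t, eps0, A, n):
    """Findley power law: instantaneous strain plus a power-law creep term."""
    return eps0 + A * t ** n

t = np.linspace(1, 2000, 80)                       # hours
true = findley(t, 0.004, 2.5e-4, 0.35)
eps = true + np.random.default_rng(3).normal(0, 5e-5, t.size)

popt, _ = curve_fit(findley, t, eps, p0=[1e-3, 1e-4, 0.5])
print("eps0 = {:.4f}, A = {:.2e}, n = {:.3f}".format(*popt))
print("predicted strain at 10,000 h:", round(findley(10000, *popt), 4))
```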

Keywords: creep, comparative study, modeling, composite materials

Procedia PDF Downloads 442
2899 A Framework for Designing Complex Product-Service Systems with a Multi-Domain Matrix

Authors: Yoonjung An, Yongtae Park

Abstract:

Offering a product-service system (PSS) is a well-accepted strategy that companies may adopt to provide a set of systemic solutions to customers. PSSs were initially provided in a simple form but now take diversified and complex forms involving multiple services, products and technologies. With the growing interest in the PSS, frameworks for PSS development have been introduced by many researchers. However, most of the existing frameworks fail to examine the various relations existing in a complex PSS. Since designing a complex PSS involves the full integration of multiple products and services, it is essential to identify not only product-service relations but also product-product and service-service relations, and it is equally important to specify how they are related, for a better understanding of the system. Moreover, as customers tend to view their purchase from a more holistic perspective, a PSS should be developed based on the whole system's requirements rather than focusing only on the product requirements or the service requirements. Thus, we propose a framework to develop a complex PSS that is fully coordinated with the requirements of both worlds. Specifically, our approach adopts a multi-domain matrix (MDM). An MDM identifies not only inter-domain relations but also intra-domain relations, so it helps to design a PSS that includes highly desired and closely related core functions and features. The various dependency types and rating schemes proposed in our approach also support the integration process.
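A minimal sketch of such a multi-domain matrix for a toy PSS; the element names and the rating scheme are illustrative assumptions:

```python
import numpy as np

products = ["sensor", "gateway"]
services = ["monitoring", "maintenance", "reporting"]
elements = products + services

# Dependency-strength rating: 0 none, 1 weak, 3 medium, 9 strong (assumed scheme).
mdm = np.zeros((len(elements), len(elements)), dtype=int)

def rel(a, b, w):
    i, j = elements.index(a), elements.index(b)
    mdm[i, j] = mdm[j, i] = w

rel("sensor", "gateway", 9)            # intra-domain: product-product
rel("monitoring", "maintenance", 3)    # intra-domain: service-service
rel("sensor", "monitoring", 9)         # inter-domain: product-service
rel("gateway", "reporting", 3)

n_p = len(products)
print("product-product block:\n", mdm[:n_p, :n_p])   # intra-domain block
print("product-service block:\n", mdm[:n_p, n_p:])   # inter-domain block
```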

Keywords: inter-domain relations, intra-domain relations, multi-domain matrix, product-service system design

Procedia PDF Downloads 642
2898 Modeling of Glycine Transporters in Mammalian Using the Probability Approach

Authors: K. S. Zaytsev, Y. R. Nartsissov

Abstract:

Glycine is one of the key inhibitory neurotransmitters in the central nervous system (CNS), and glycinergic transmission is highly dependent on its appropriate reuptake from the synaptic cleft. Glycine transporters (GlyT) of types 1 and 2 are the enzymes providing glycine transport back to neuronal and glial cells along with Na⁺ and Cl⁻ co-transport. The distribution and stoichiometry of GlyT1 and GlyT2 differ in detail, and GlyT2 is the more interesting for this research, as it takes glycine back up into neurons, whereas GlyT1 is located in glial cells. During GlyT2 activity, the translocation of the amino acid is accompanied by the consecutive binding of one chloride and three sodium ions (two sodium ions for GlyT1). In the present study, we developed a computer simulator of GlyT2 and GlyT1 activity, based on known experimental data, for the quantitative estimation of membrane glycine transport. The functioning of a single protein was described using a probability approach in which each enzyme state is considered separately. The resulting scheme of transporter functioning, realized as a sequence of elementary steps, takes into account every event of substrate association and dissociation. Computer experiments using up-to-date kinetic parameters yield the number of translocated glycine molecules and Na⁺ and Cl⁻ ions per time period. The flexibility of the developed software makes it possible to evaluate the glycine reuptake pattern over time under different internal characteristics of the enzyme's conformational transitions. We investigated the behavior of the system over a wide range of the equilibrium constant (from 0.2 to 100), which has not been determined experimentally. A significant influence of the equilibrium constant on the glycine transfer process is shown in the range from 0.2 to 10; environmental conditions such as ion and glycine concentrations become decisive when the constant lies outside this range.
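A minimal sketch of the probability approach described above: the transporter is treated as a chain of elementary association/dissociation steps and simulated with Gillespie's algorithm (the states and rates are illustrative, not the paper's kinetics):

```python
import random

random.seed(7)
# GlyT2-like cycle: empty -> Na+ -> Na+ -> Na+ -> Cl- -> glycine -> translocate
N_STEPS = 6
k_fwd = [50.0, 40.0, 30.0, 60.0, 20.0, 100.0]   # s^-1 forward (association) rates, assumed
k_back = [5.0, 5.0, 5.0, 6.0, 2.0]              # s^-1 backward (dissociation) rates, assumed

def simulate(t_end=10.0):
    t, state, transported = 0.0, 0, 0
    while t < t_end:
        fwd = k_fwd[state]
        back = k_back[state - 1] if state > 0 else 0.0
        total = fwd + back
        t += random.expovariate(total)          # waiting time to the next event
        if random.random() < fwd / total:       # association / forward step
            state += 1
            if state == N_STEPS:                # glycine released on the inside
                transported += 1
                state = 0
        else:                                   # dissociation / backward step
            state -= 1
    return transported

print("glycine molecules translocated in 10 s:", simulate())
```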

Keywords: glycine, inhibitory neurotransmitters, probability approach, single protein functioning

Procedia PDF Downloads 119
2897 Probability Sampling in Matched Case-Control Study in Drug Abuse

Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell

Abstract:

Background: Although random sampling is generally considered the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users, who then identified “friend controls,” and the other using a random sample of non-drug users (controls), who then identified “friend cases.” Models to predict drug abuse from risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using a bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p = 0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be statistically superior to snowball sampling and may represent a viable alternative to it.
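For illustration, a sketch of the two comparisons described above on synthetic data: bootstrap variability of model coefficients (ordinary logistic regression standing in for the conditional model) and AUCs of a model fitted on one sample and applied to another:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_sample(n, noise):
    X = rng.normal(size=(n, 3))                     # three risk factors
    logits = 1.2 * X[:, 0] - 0.8 * X[:, 1] + noise * rng.normal(size=n)
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)
    return X, y

X_rand, y_rand = make_sample(300, noise=0.5)        # "random-sample" analogue
X_snow, y_snow = make_sample(300, noise=2.5)        # noisier "snowball" analogue

betas = []
for _ in range(100):                                # bootstrap the coefficients
    idx = rng.integers(0, 300, 300)
    betas.append(LogisticRegression().fit(X_rand[idx], y_rand[idx]).coef_[0])
print("bootstrap SE of coefficients:", np.std(betas, axis=0).round(3))

model = LogisticRegression().fit(X_rand, y_rand)
for name, (X, y) in [("random", (X_rand, y_rand)), ("snowball", (X_snow, y_snow))]:
    print(f"AUC on {name} data: {roc_auc_score(y, model.predict_proba(X)[:, 1]):.3f}")
```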

Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling

Procedia PDF Downloads 493
2896 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers; by means of such equipment, one is able to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance, and statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to produce incorrect results when skew data are present, since normal distributions are symmetric about their means. In order to visualize and quantify this aspect, nine statistical distributions (symmetric and skew) have been considered to model a hypothetical slope stability problem; the data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable, and to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in the light of risk management.
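A minimal sketch of the workflow on synthetic friction-angle data: fit candidate distributions (the paper compared nine, including Dagum; three are shown here), pick the best by likelihood, and estimate P(FS < 1) by Monte Carlo for a simple infinite-slope model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
phi_deg = rng.gamma(shape=40, scale=0.75, size=60)     # skewed "test data", degrees

candidates = {"normal": stats.norm, "lognorm": stats.lognorm, "gamma": stats.gamma}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(phi_deg)
    ll = np.sum(dist.logpdf(phi_deg, *params))         # log-likelihood of the fit
    fits[name] = (params, ll)
    print(f"{name:8s} log-likelihood {ll:8.2f}")

best = max(fits, key=lambda k: fits[k][1])
params = fits[best][0]

# Infinite slope, cohesionless soil: FS = tan(phi) / tan(beta), slope angle beta.
beta = np.radians(25)
phi_sim = np.radians(candidates[best].rvs(*params, size=200_000, random_state=1))
fs = np.tan(phi_sim) / np.tan(beta)
print(f"best fit: {best}, P(FS < 1) = {np.mean(fs < 1):.4f}")
```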

Keywords: statistical slope stability analysis, skew distributions, probability of failure, functions of random variables

Procedia PDF Downloads 339
2895 CE Method for Development of Japan's Stochastic Earthquake Catalogue

Authors: Babak Kamrani, Nozar Kishi

Abstract:

A stochastic catalog represents the event module of earthquake loss estimation models. It includes a series of events with different magnitudes and corresponding frequencies/probabilities. For the development of a stochastic catalog, random or uniform sampling methods are used to sample events from the seismicity model; to cover the whole magnitude-frequency distribution (MFD), a huge number of events must be generated with these methods. The characteristic event (CE) method instead chooses the events based on the interest of the insurance industry: we divide the MFD of each source into bins chosen according to the probabilities of interest to insurers. First, we collected the information for the available seismic sources, divided into fault sources, subduction sources, and events without a specific fault source. We then developed the MFD for each individual and areal source based on the seismicity of the sources. Afterward, we calculated the CE magnitudes based on the desired probability. To develop the stochastic catalog, we also introduced uncertainty into the locations of the events.
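A minimal sketch of the binning idea behind the CE method, using an assumed Gutenberg-Richter MFD: each magnitude bin contributes one characteristic event carrying the bin's occurrence rate:

```python
import numpy as np

a, b = 4.0, 1.0                          # Gutenberg-Richter: log10 N(>=M) = a - b*M
m_min, m_max = 5.0, 8.0

def rate_ge(m):                          # annual rate of events with magnitude >= m
    return 10 ** (a - b * np.asarray(m))

edges = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0])   # magnitude bins
for lo, hi in zip(edges[:-1], edges[1:]):
    lam = rate_ge(lo) - rate_ge(hi)      # occurrence rate of the bin
    # Characteristic event: rate-weighted mean magnitude within the bin.
    ms = np.linspace(lo, hi, 200)
    pdf = b * np.log(10) * 10 ** (a - b * ms)
    m_ce = float((ms * pdf).sum() / pdf.sum())
    print(f"bin {lo:.1f}-{hi:.1f}: rate {lam:8.4f}/yr  CE magnitude {m_ce:.2f}")
```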

Keywords: stochastic catalogue, earthquake loss, uncertainty, characteristic event

Procedia PDF Downloads 300
2894 Error Probability of Multi-User Detection Techniques

Authors: Komal Babbar

Abstract:

Multiuser detection is the intelligent estimation/demodulation of transmitted bits in the presence of multiple access interference (MAI). The authors present the bit error rate (BER) achieved by the linear multi-user detectors: the matched filter (which treats the MAI as AWGN), the decorrelating detector, and the MMSE detector. In this work, the authors investigate the bit error probability analysis for the matched filter, decorrelating and MMSE detectors. This problem arises in several practical CDMA applications where the receiver may not have full knowledge of the number of active users and their signature sequences. In particular, the behavior of the MAI at the output of the multi-user detectors (MUD) is examined under various asymptotic conditions, including large signal-to-noise ratio, large near-far ratios, and a large number of users. In the last section, the authors also show MATLAB simulation results for the multiuser detection techniques, i.e., matched filter, decorrelating and MMSE, for 2 users and 10 users.
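For illustration, a sketch of the three linear detectors for a two-user synchronous CDMA channel, with an assumed signature cross-correlation and a near-far power imbalance:

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 0.4                                   # signature cross-correlation, assumed
R = np.array([[1.0, rho], [rho, 1.0]])      # signature correlation matrix
A = np.diag([1.0, 2.0])                     # user amplitudes (near-far situation)
sigma = 0.4                                 # noise std at the matched-filter output
n_bits = 200_000

b = rng.choice([-1.0, 1.0], size=(2, n_bits))
# Matched-filter bank outputs: y = R A b + n, noise covariance sigma^2 R.
Lc = np.linalg.cholesky(sigma ** 2 * R)
y = R @ A @ b + Lc @ rng.normal(size=(2, n_bits))

detectors = {
    "matched filter": np.eye(2),                                  # MAI treated as AWGN
    "decorrelator": np.linalg.inv(R),                             # zero-forcing
    "MMSE": np.linalg.inv(R + sigma ** 2 * np.linalg.inv(A @ A)), # noise/MAI trade-off
}
for name, T in detectors.items():
    ber = np.mean(np.sign(T @ y) != b, axis=1)
    print(f"{name:14s} BER user1 {ber[0]:.4f}  user2 {ber[1]:.4f}")
```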

Keywords: code division multiple access, decorrelating, matched filter, minimum mean square detection (MMSE) detection, multiple access interference (MAI), multiuser detection (MUD)

Procedia PDF Downloads 528
2893 Mathematics Anxiety among Male and Female Students

Authors: Wern Lin Yeo, Choo Kim Tan, Sook Ling Lew

Abstract:

Mathematics anxiety refers to the feeling of anxiety when one has difficulty solving mathematical problems, and it is the most common type of anxiety among students. However, the levels of anxiety differ between males and females. A few past studies have examined the relationship between anxiety and gender, but without conclusive results. Hence, the purpose of this study is to determine the relationship between anxiety level and gender among undergraduates at a private university in Malaysia. A convenience sampling method was used, in which students were selected based on the grouping assigned by the faculty; 214 undergraduates registered in probability courses participated in the study. The Mathematics Anxiety Rating Scale (MARS) was the instrument used to determine students' anxiety level towards probability, and its reliability and validity were established before the main study was conducted. In the main study, students were briefed about the study; participation was voluntary, and students were given a consent form to indicate whether they agreed to participate. Two weeks were given for students to complete the online questionnaire. The data collected were analyzed using the Statistical Package for the Social Sciences (SPSS) to determine anxiety levels, classified as low, average or high based on the scores obtained relative to the mean and standard deviation: scores more than one standard deviation below the mean indicated low anxiety, scores within one standard deviation of the mean indicated average anxiety, and scores more than one standard deviation above the mean indicated high anxiety. The results showed that both genders had an average anxiety level, although males appeared more frequently in all three anxiety levels than females, and the mean value obtained for males (M = 3.62) was higher than for females (M = 3.42). For the difference in anxiety level between genders to be significant, the p-value should be less than .05; the p-value obtained in this study was .117. Thus, there was no significant difference in anxiety level between the genders; in other words, anxiety level was not related to gender.
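For clarity, the mean plus/minus one-standard-deviation classification rule described above, applied to illustrative scores:

```python
import numpy as np

scores = np.array([2.1, 3.0, 3.4, 3.5, 3.6, 3.8, 4.1, 4.9])  # toy MARS scores
mu, sd = scores.mean(), scores.std(ddof=1)

def level(s):
    if s < mu - sd:
        return "low"       # more than one SD below the mean
    if s > mu + sd:
        return "high"      # more than one SD above the mean
    return "average"       # within one SD of the mean

for s in scores:
    print(s, "->", level(s))
```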

Keywords: anxiety level, gender, mathematics anxiety, probability and statistics

Procedia PDF Downloads 291
2892 A Recognition Method of Ancient Yi Script Based on Deep Learning

Authors: Shanxiong Chen, Xu Han, Xiaolong Wang, Hui Ma

Abstract:

The Yi are an ethnic group living mainly in mainland China, with their own spoken and written language systems developed over thousands of years. Ancient Yi is one of the six ancient scripts of the world; it keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of ancient Yi characters helps to transform these documents into electronic form, making their storage and dissemination convenient. Due to historical and regional limitations, research on the recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models, and two fully connected layers performed the compression of the features. Finally, at the softmax layer, the orthographic features of the ancient Yi characters were re-evaluated, their probability distributions were obtained, and the characters with the highest-probability features were recognized. Tests show that the method achieves higher precision than the traditional CNN model for handwriting recognition of ancient Yi.
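A minimal sketch of the network shape described above, with assumed layer sizes and class count; the Alpha-Beta-divergence re-encoding of the output neurons is omitted here for brevity:

```python
import torch
import torch.nn as nn

class YiNet(nn.Module):
    def __init__(self, n_classes=500):          # class count is an assumption
        super().__init__()
        self.features = nn.Sequential(           # four convolutional layers
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(          # two FC layers compress the features
            nn.Flatten(), nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):                         # x: (batch, 1, 64, 64) grayscale glyphs
        return torch.softmax(self.classifier(self.features(x)), dim=1)

probs = YiNet()(torch.randn(2, 1, 64, 64))
print(probs.shape, probs[0].argmax().item())      # probability distribution over classes
```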

Keywords: recognition, CNN, Yi character, divergence

Procedia PDF Downloads 165
2891 Predictive Modelling of Curcuminoid Bioaccessibility as a Function of Food Formulation and Associated Properties

Authors: Kevin De Castro Cogle, Mirian Kubo, Maria Anastasiadi, Fady Mohareb, Claire Rossi

Abstract:

Background: The bioaccessibility of bioactive compounds is a critical determinant of the nutritional quality of various food products. Despite its importance, there are few comprehensive studies assessing how the composition of a food matrix influences the bioaccessibility of a compound of interest. This knowledge gap has prompted a growing need to investigate the intricate relationship between food matrix formulations and the bioaccessibility of bioactive compounds. One class of bioactive compounds that has attracted considerable attention is the curcuminoids. These naturally occurring phytochemicals, extracted from the roots of Curcuma longa, have gained popularity owing to their purported health benefits and are also well known for their poor bioaccessibility. Project aim: The primary objective of this research is to systematically assess the influence of matrix composition on the bioaccessibility of curcuminoids, and to develop a series of predictive models for bioaccessibility, providing valuable insights for optimising the formulas of functional foods and more descriptive nutritional information for potential consumers. Methods: Food formulations enriched with curcuminoids were subjected to simulated in vitro digestion, and their bioaccessibility was characterized with chromatographic and spectrophotometric techniques. The resulting data served as the foundation for predictive models capable of estimating bioaccessibility from specific physicochemical properties of the food matrices. Results: One striking finding was the strong correlation between the concentration of macronutrients within the food formulations and the bioaccessibility of curcuminoids. Macronutrient content emerged as a very informative explanatory variable of bioaccessibility and was used, alongside other variables, as a predictor in a Bayesian hierarchical model that predicted curcuminoid bioaccessibility accurately (optimisation performance of R² = 0.97) for the majority of cross-validated test formulations (LOOCV R² = 0.92). These preliminary results open the door to further exploration of a broader spectrum of food matrix types and additional properties that may influence bioaccessibility. Conclusions: This research sheds light on the intricate interplay between food matrix composition and the bioaccessibility of curcuminoids, laying a foundation for future investigations and offering a promising avenue for advancing our understanding of bioactive compound bioaccessibility and its implications for the food industry and informed consumer choices.
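A minimal sketch of the validation loop described above on synthetic formulation data, with a simple Bayesian linear model standing in for the authors' hierarchical model:

```python
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import r2_score

rng = np.random.default_rng(8)
# Columns: fat, protein, carbohydrate content (g/100 g) of each toy formulation.
X = rng.uniform(0, 40, size=(30, 3))
bioacc = 5 + 1.2 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(0, 2, 30)   # % (synthetic)

preds = np.empty(len(bioacc))
for train, test in LeaveOneOut().split(X):        # leave-one-formulation-out CV
    model = BayesianRidge().fit(X[train], bioacc[train])
    preds[test] = model.predict(X[test])
print("LOOCV R^2:", round(r2_score(bioacc, preds), 3))
```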

Keywords: bioactive bioaccessibility, food formulation, food matrix, machine learning, probabilistic modelling

Procedia PDF Downloads 68
2890 Crystalline Particles Dispersed Cu-Based Metallic Glassy Composites Fabricated by Spark Plasma Sintering

Authors: Sandrine Cardinal, Jean-Marc Pelletier, Guang Xie, Florian Mercier, Florent Delmas

Abstract:

Bulk metallic glasses exhibit several properties superior to those of their crystalline counterparts, such as high strength, a high elastic limit and good corrosion resistance, so they can be considered good candidates for structural applications in many sectors. However, they are generally brittle and do not exhibit plastic deformation at room temperature. These materials are mainly obtained by rapid cooling from the liquid state to prevent crystallization, which limits their size. To overcome these two drawbacks, brittleness and limited dimensions, composites with a metallic glassy matrix reinforced by a second phase, whose role is to slow crack growth, have been developed; the limited size of the pieces is addressed by producing the material from amorphous powders through densification under load. In this study, Cu50Zr45Al5 bulk metallic glassy matrix composites (MGMCs) containing different volume fractions (Vf) of crystalline Zr particles were manufactured by spark plasma sintering (SPS), and the microstructure, thermal stability and mechanical properties of the MGMCs were investigated. The matrix of the composites remains fully amorphous after consolidation at 420°C under 600 MPa, and a good dispersion of the particles in the glassy matrix is obtained. The results show that the compressive strength decreases with Vf, from 1670 MPa (Vf = 0%) to 1300 MPa (Vf = 30%); the elastic modulus decreases only slightly, from 97.3 GPa to 94.5 GPa, respectively; and the plasticity is improved from 0 to 4%. Fractographic investigation indicates good bonding between the amorphous matrix and the crystalline particles. In conclusion, the present study has demonstrated that the SPS method is useful for the synthesis of bulk glassy composites: large specimens with a controlled microstructure and interesting ductility can be obtained compared with other methods.

Keywords: composite, mechanical properties, metallic glasses, spark plasma sintering

Procedia PDF Downloads 281
2889 Neuron Imaging in Lateral Geniculate Nucleus

Authors: Sandy Bao, Yankang Bao

Abstract:

The understanding of the information being processed in the brain, especially in the lateral geniculate nucleus (LGN), has proven challenging for modern neuroscience and for researchers focusing on how neurons process signals and images. In this paper, we propose a method to process the colors within different layers of the LGN, that is, green information in layers 4 & 6 and red & blue in layers 3 & 5, based on the surface dimension of the layers. We take into consideration the images in the LGN and the visual cortex: the edge information detected in the visual cortex is fed back to the layers of the LGN and combined with the LGN image to form a new image, which is clearer and sharper and makes it easier to identify objects. A Matrix Laboratory (MATLAB) simulation is performed, and the results show that the clarity of the output image improves significantly.
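A minimal sketch of the layer scheme described above: route the green channel to "layers 4 & 6" and red/blue to "layers 3 & 5", sharpen each with an edge map standing in for the visual-cortex feedback, and recombine (the kernel and weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.random((64, 64, 3))                 # stand-in RGB image in [0, 1]

layers = {"3_5_red": img[:, :, 0], "4_6_green": img[:, :, 1],
          "3_5_blue": img[:, :, 2]}

def edges(ch):                                # crude Laplacian edge detector
    e = np.zeros_like(ch)
    e[1:-1, 1:-1] = np.abs(4 * ch[1:-1, 1:-1] - ch[:-2, 1:-1]
                           - ch[2:, 1:-1] - ch[1:-1, :-2] - ch[1:-1, 2:])
    return e

# Feed the edge map back into each layer's image, then recombine the channels.
enhanced = {k: np.clip(ch + 0.5 * edges(ch), 0, 1) for k, ch in layers.items()}
out = np.stack([enhanced["3_5_red"], enhanced["4_6_green"],
                enhanced["3_5_blue"]], axis=2)
print("output image:", out.shape)
```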

Keywords: lateral geniculate nucleus, matrix laboratory, neuroscience, visual cortex

Procedia PDF Downloads 280
2888 Optimized Dynamic Bayesian Networks and Neural Verifier Test Applied to On-Line Isolated Characters Recognition

Authors: Redouane Tlemsani, Belkacem Kouninef, Abdelkader Benyettou

Abstract:

In this paper, our system is a Markovian system which can be viewed as a Dynamic Bayesian Network. One of the major interests of these systems resides in the complete training of the models (topology and parameters) from training data. Bayesian Networks represent models of uncertain knowledge about complex phenomena. They are a union of probability theory and graph theory, providing effective tools to represent a joint probability distribution over a set of random variables. Knowledge is represented by describing, through graphs, the causal relations existing between the variables defining the field of study. The theory of Dynamic Bayesian Networks is a generalization of Bayesian networks to dynamic processes. Our objective amounts to finding the structure which best represents the relationships (dependencies) between the variables of a dynamic Bayesian network. In pattern recognition applications, one usually fixes the structure in advance, which obliges us to admit some strong assumptions (for example, independence between some variables).
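For illustration, a sketch of a structure-search criterion at work: candidate parent sets for a node are scored by BIC on toy binary data (in a DBN, the candidate parents live in the previous time slice):

```python
import numpy as np

rng = np.random.default_rng(9)
T = 2000
x_prev = rng.integers(0, 2, T)
y_prev = rng.integers(0, 2, T)
x_next = np.where(rng.random(T) < 0.9, x_prev, 1 - x_prev)   # X_t depends on X_{t-1}

def bic(child, parents):
    """Log-likelihood of a tabular CPD minus a BIC complexity penalty."""
    cols = (np.column_stack(parents) if parents
            else np.zeros((len(child), 0), dtype=int))
    ll, n_params = 0.0, 0
    for cfg in set(map(tuple, cols)):            # each parent configuration
        mask = np.all(cols == cfg, axis=1)
        n1 = child[mask].sum()
        n0 = mask.sum() - n1
        p = min(max(n1 / mask.sum(), 1e-6), 1 - 1e-6)
        ll += n1 * np.log(p) + n0 * np.log(1 - p)
        n_params += 1
    return ll - 0.5 * n_params * np.log(len(child))

for name, parents in [("{}", []), ("{X_prev}", [x_prev]),
                      ("{X_prev, Y_prev}", [x_prev, y_prev])]:
    print(f"parents {name:18s} BIC = {bic(x_next, parents):10.1f}")
```

The middle structure should win: adding the irrelevant parent Y_prev buys no likelihood but pays the complexity penalty.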

Keywords: Arabic online character recognition, dynamic Bayesian networks, pattern recognition

Procedia PDF Downloads 619
2887 An Analysis of a Queueing System with Heterogeneous Servers Subject to Catastrophes

Authors: M. Reni Sagayaraj, S. Anand Gnana Selvam, R. Reynald Susainathan

Abstract:

This study analyzes a queueing system with blocking and no waiting line. Customers arrive according to a Poisson process, and the service times follow an exponential distribution. There are two non-identical servers in the system; the queue discipline is FCFS, and customers select the servers on a fastest-server-first (FSF) basis. The service times are exponentially distributed with parameters μ1 and μ2 at servers I and II, respectively. In addition, catastrophes occur in the system in a Poisson manner with rate γ. When server I is busy or blocked, a customer who arrives leaves the system without being served; such customers are called lost customers, and the probability of losing a customer is computed for the system. The explicit time-dependent probabilities of the system size are obtained, and a numerical example is presented to show the managerial insights of the model. Finally, the probability that an arriving customer finds the system busy and the average number of busy servers in the steady state are obtained numerically.
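A minimal sketch of the transient analysis on an assumed parameter set: a four-state CTMC for the two servers with catastrophes, solved via the matrix exponential p(t) = p(0)·exp(Qt):

```python
import numpy as np
from scipy.linalg import expm

lam, mu1, mu2, gamma = 3.0, 4.0, 2.0, 0.5        # illustrative rates
# States: 0 empty, 1 only server I busy, 2 only server II busy, 3 both busy.
# Arrivals join free server I (FSF); when server I is busy, arrivals are lost;
# catastrophes (rate gamma) empty the system.
Q = np.array([
    [-lam,                lam,               0.0,                    0.0],
    [mu1 + gamma,        -(mu1 + gamma),     0.0,                    0.0],
    [mu2 + gamma,         0.0,              -(mu2 + gamma + lam),    lam],
    [gamma,               mu2,               mu1,                   -(mu1 + mu2 + gamma)],
])
p0 = np.array([1.0, 0.0, 0.0, 0.0])              # start from the empty system

for t in (0.5, 2.0, 10.0):
    p = p0 @ expm(Q * t)                         # transient state probabilities
    print(f"t = {t:4.1f}  p = {np.round(p, 4)}  P(arrival lost) = {p[1] + p[3]:.4f}")
```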

Keywords: queueing system, blocking, poisson process, heterogeneous servers, queue discipline FCFS, busy period

Procedia PDF Downloads 507
2886 An Experimental Investigation of the Cognitive Noise Influence on the Bistable Visual Perception

Authors: Alexander E. Hramov, Vadim V. Grubov, Alexey A. Koronovskii, Maria K. Kurovskaya, Anastasija E. Runnova

Abstract:

The perception of visual signals in the brain was among the first issues discussed in terms of multistability, which has been introduced to provide mechanisms for information processing in biological neural systems. In this work, the influence of cognitive noise on the visual perception of multistable pictures has been investigated. The study includes an experiment with the bistable Necker cube illusion and the theoretical background explaining the obtained experimental results. In our experiments, Necker cubes with different wireframe contrasts were shown repeatedly to different people, and the probability of choosing one of the cube's projections was calculated for each picture. The Necker cube was placed in the middle of a computer screen as black lines on a white background, and the contrast of the three middle lines centered at the left middle corner was used as a control parameter. Between two successive demonstrations of Necker cubes, another picture was shown to distract attention and to make the perception of the next Necker cube more independent of the previous one. Eleven subjects, male and female, aged 20 through 45, were studied. The choice of the Necker cube projection was detected with an electroencephalograph recorder (Encephalan-EEGR-19/26, Medicom MTD). To treat the experimental results, we carried out a theoretical analysis using the simplest double-well potential model in the presence of noise, which leads to the Fokker-Planck equation for the probability density of the stochastic process. For the first time, an analytical solution for the probability of selecting one of the Necker cube projections for different values of wireframe contrast has been obtained. Furthermore, using the experimental measurements and the method of least squares, we calculated the value of the parameter corresponding to the cognitive noise of the person being studied; the range of cognitive-noise parameter values for the studied subjects turned out to be [0.08; 0.55]. It should be noted that the experimental results have good reproducibility: the same person studied repeatedly on another day produces very similar data with very close levels of cognitive noise. We found excellent agreement between the analytically deduced probability and the results obtained in the experiment. This good qualitative agreement between theoretical and experimental results indicates that even such a simple model allows simulating brain cognitive dynamics and estimating an important cognitive characteristic of the brain, its cognitive noise.
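For illustration, a sketch of the double-well picture: overdamped Langevin dynamics in U(x) = x^4/4 - x^2/2 + c*x, where the asymmetry c stands in for the wireframe contrast and D for the cognitive noise (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)

def choice_probability(c, D, n_trials=2000, t_steps=3000, dt=0.01):
    x = np.zeros(n_trials)                     # start undecided between the wells
    for _ in range(t_steps):                   # Euler-Maruyama integration
        drift = -(x ** 3 - x + c)              # drift = -U'(x)
        x += drift * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_trials)
    return np.mean(x > 0)                      # fraction ending in the x > 0 well

for c in (-0.2, -0.05, 0.0, 0.05, 0.2):       # "contrast" parameter
    print(f"c = {c:+.2f}  P(right-hand interpretation) = "
          f"{choice_probability(c, D=0.25):.3f}")
```

Sweeping c traces out a sigmoid-like choice curve whose steepness is set by the noise D, which is the quantity fitted to each subject above.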

Keywords: bistability, brain, noise, perception, stochastic processes

Procedia PDF Downloads 445
2885 Finite Element Modelling of a 3D Woven Composite for Automotive Applications

Authors: Ahmad R. Zamani, Luigi Sanguigno, Angelo R. Maligno

Abstract:

A 3D woven composite, designed for automotive applications, is studied using the Abaqus Finite Element (FE) software suite. Python scripts were developed to build FE models of the woven composite in the Complete Abaqus Environment (CAE); they can read TexGen or WiseTex files and automatically generate consistent meshes of the fabric and the matrix. A user menu is provided to help define parameters for the FE models, such as the type and size of the elements in fabric and matrix, as well as the type of matrix-fabric interaction. Node-to-node constraints were imposed to guarantee the periodicity of the deformed shapes at the boundaries of the representative volume element of the composite. Tensile loads along the three axes and biaxial loads in the x-y directions have been applied at different fibre volume fractions (FVFs). A simple damage model was implemented via an Abaqus user material (UMAT) subroutine. Existing tools for homogenization were also used, including voxel mesh generation from TexGen as well as the Abaqus Micromechanics plugin. Linear relations between the homogenised elastic properties and the FVFs are given. The FE models of the composite exhibited balanced behaviour with respect to the warp and weft directions in terms of both stiffness and strength.

Keywords: 3D woven composite (3DWC), meso-scale finite element model, homogenisation of elastic material properties, Abaqus Python scripting

Procedia PDF Downloads 146
2884 Image Rotation Using an Augmented 2-Step Shear Transform

Authors: Hee-Choul Kwon, Heeyong Kwon

Abstract:

Image rotation is one of the main pre-processing steps in image processing and image pattern recognition. It is usually implemented as a rotation-matrix multiplication, which requires many floating-point arithmetic operations and trigonometric calculations and therefore takes a long time to execute. There has thus been a need for a high-speed image rotation algorithm without these two major time-consuming operations. However, such fast rotation has a drawback, namely distortions in the rotated image. We solved this problem using an augmented two-step shear transform. We compare the presented algorithm with conventional rotation on images of various sizes; the experimental results show that the presented algorithm is superior to conventional rotation.
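For orientation, a sketch of the classic shear-based rotation that such algorithms build on (Paeth's three-shear decomposition with integer shifts, hence no per-pixel trigonometry); the paper's augmented two-step variant is a modification of this scheme:

```python
import numpy as np

def shear_rotate(img, theta):
    a = -np.tan(theta / 2)                 # x-shear factor
    b = np.sin(theta)                      # y-shear factor
    h, w = img.shape
    size = int(np.ceil(np.hypot(h, w))) * 2
    out = np.zeros((size, size), dtype=img.dtype)
    cy, cx = size // 2, size // 2
    for y in range(h):
        for x in range(w):
            u, v = x - w // 2, y - h // 2
            u = u + round(a * v)           # shear 1 (x direction)
            v = v + round(b * u)           # shear 2 (y direction)
            u = u + round(a * v)           # shear 3 (x direction)
            out[cy + v, cx + u] = img[y, x]
    return out

img = np.zeros((40, 40), dtype=np.uint8)
img[8:32, 18:22] = 255                     # a vertical bar
rot = shear_rotate(img, np.radians(30))
print("nonzero pixels kept:", (rot > 0).sum(), "of", (img > 0).sum())
```

Because each integer shear is a bijection on the pixel grid, no pixels are lost or doubled; the residual jaggedness is the distortion that the augmented scheme above targets.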

Keywords: high-speed rotation operation, image rotation, transform matrix, image processing, pattern recognition

Procedia PDF Downloads 278
2883 Sampled-Data Control for Fuel Cell Systems

Authors: H. Y. Jung, Ju H. Park, S. M. Lee

Abstract:

A sampled-data controller is presented for solid oxide fuel cell systems expressed by a sector-bounded nonlinear model. Sector-bounded nonlinear systems have a feedback connection between a linear dynamical system and a nonlinearity satisfying certain sector-type constraints. The sampled-data control scheme is also very useful since it makes it possible to handle digital controllers, and increasing research effort has been devoted to sampled-data control systems with the development of modern high-speed computers. The proposed control law is obtained by solving a convex problem satisfying several linear matrix inequalities (LMIs). Simulation results are given to show the effectiveness of the proposed design method.
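A minimal sketch of the LMI machinery referred to above, using cvxpy on a toy Lyapunov feasibility problem; the paper's sampled-data LMIs are larger but are solved with the same machinery:

```python
# Find P > 0 with A^T P + P A < 0, i.e. a quadratic Lyapunov certificate.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])       # stable toy plant matrix
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),           # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints) # pure feasibility problem
prob.solve(solver=cp.SCS)
print(prob.status, "\nP =\n", P.value)
```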

Keywords: sampled-data control, fuel cell, linear matrix inequalities, nonlinear control

Procedia PDF Downloads 566
2882 Employers’ Preferences When Employing the Solo Self-Employed: A Vignette Study in the Netherlands

Authors: Lian Kösters, Wendy Smits, Raymond Montizaan

Abstract:

The number of solo self-employed workers in the Netherlands has been increasing for years, and the relative increase is among the largest in the EU. To explain this increase, most studies have focused on the supply side: the workers who offer themselves as solo self-employed. Studies that focus on the demand side, the employers who hire the solo self-employed, are still scarce. The studies of employer behaviour conducted until now show that employers mainly choose self-employed workers when they have a temporary need for specialist knowledge, but also during projects or production peaks; these studies do not provide insight into employers' considerations regarding different contract types. In this study, interviews with employers were conducted and the available literature was consulted to provide an overview of the factors employers use to compare contract types. That input was used to set up a vignette study, carried out at the end of 2021 among almost 1,000 business owners, HR managers, and business leaders of Dutch companies. Each respondent was given two sets of five fictitious candidates for two possible positions in their organization and was asked to rank these candidates. The positions varied with regard to the type of tasks (core tasks or support tasks) and the time it takes to train new people for the position. The respondents were asked additional questions about the positions, such as the required level of education, the duration, and the degree of predictability of the tasks. The fictitious candidates varied, among other things, in the type of contract under which they would come to work for the organization. The results were analyzed using a rank-ordered logit analysis. This vignette setup makes it possible to see which factors matter most when employers choose whether to hire a solo self-employed person rather than workers on other contracts. The results show no indication that employers would want to hire solo self-employed workers en masse; they prefer regular employee contracts. The probability of being chosen with a solo self-employed contract over someone who comes to work as a temporary employee is 32 percent, which is even lower than for on-call and temporary agency workers; for a permanent contract, this probability is 46 percent. The results indicate that employers consider knowledge and skills more important than the solo self-employed contract and that these can compensate: a solo self-employed candidate with 10 years of work experience has a 63 percent probability of being found attractive by an employer, compared to a temporary employee without work experience. This suggests that employers are willing to give someone a contract that is less attractive for the employer if the worker so wishes. The results also show that the probability that a solo self-employed person is preferred over a candidate with a temporary employee contract is somewhat higher in business economics, administrative and technical professions. No significant results were found for factors where solo self-employed workers were expected to be preferred more often, such as unpredictable or temporary work.
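For illustration, a sketch of the rank-ordered ("exploded") logit used above: the likelihood of an observed ranking is the product of multinomial-logit choice probabilities over successively smaller candidate sets (the utilities are illustrative, e.g. contract-type and experience effects):

```python
import numpy as np

beta = {"permanent": 0.0, "temporary": -0.6, "solo_self_employed": -1.1}
exp_coef = 0.08                                    # utility per year of experience

candidates = [("permanent", 2), ("temporary", 0), ("solo_self_employed", 10)]
utility = np.array([beta[c] + exp_coef * yrs for c, yrs in candidates])

def ranking_probability(order, u):
    """P(observed ranking) as a product of MNL choices over shrinking sets."""
    prob, remaining = 1.0, list(range(len(u)))
    for pick in order:
        w = np.exp(u[remaining])
        prob *= w[remaining.index(pick)] / w.sum()
        remaining.remove(pick)
    return prob

print("P(rank solo first):", round(ranking_probability([2, 0, 1], utility), 3))
print("P(rank solo last): ", round(ranking_probability([0, 1, 2], utility), 3))
```

Here a long work history partly offsets the less attractive contract type, mirroring the compensation effect reported above.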

Keywords: employer behaviour, rank-ordered logit analysis, solo self-employment, temporary contract, vignette study

Procedia PDF Downloads 73
2881 Wireless Transmission of Big Data Using Novel Secure Algorithm

Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha

Abstract:

This paper presents a novel algorithm for the secure, reliable and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay and destination nodes; big data has to be transmitted from source to relay and from relay to destination with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data-transmitting region, segmenting the selected region, determining the probability ratio for each node (capture node, non-capture node and eavesdropper node) in every segment, and evaluating the probability using a binary evaluation. If the transmission is secure, the two-hop transmission of the big data resumes; otherwise, the attackers are prevented by the cooperative jamming scheme and the data is then transmitted in two hops.
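A minimal sketch of the per-segment evaluation step described above, with assumed node counts and threshold:

```python
import random

random.seed(3)
segments = [{"capture": random.randint(2, 8),
             "eavesdropper": random.randint(0, 3)} for _ in range(6)]

def secure(seg, threshold=0.75):
    total = seg["capture"] + seg["eavesdropper"]
    p_capture = seg["capture"] / total          # probability ratio of the segment
    return p_capture >= threshold               # binary secure/insecure evaluation

for i, seg in enumerate(segments):
    verdict = "transmit" if secure(seg) else "jam eavesdroppers first"
    print(f"segment {i}: {seg}  ->  {verdict}")
```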

Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance

Procedia PDF Downloads 491
2880 A Case Study of User Rating Prediction in an Automobile Recommendation System Using MapReduce

Authors: Jiao Sun, Li Pan, Shijun Liu

Abstract:

Recommender systems are widely used in contemporary industry, and plenty of work has been done in this field to help users identify items of interest. The collaborative filtering (CF) algorithm is an important technology in recommender systems. However, less work has been done on automobile recommendation systems, despite the sharp increase in the number of automobiles, and computational speed is a major weakness of collaborative filtering technology; using the MapReduce framework to optimize the CF algorithm is therefore a vital solution to this performance problem. In this paper, we present a recommendation of users' comments on industrial automobiles with various properties, based on real-world industrial datasets of user-automobile comment data, and provide recommendations for automobile providers, helping them predict users' comments on automobiles with newly introduced properties. Firstly, we address the sparseness of the matrix using a prior construction of the score matrix. Secondly, we solve the data normalization problem by removing dimensional effects from the raw automobile data, where the different dimensions of the automobile properties would otherwise introduce large errors into the CF calculation. Finally, we use the MapReduce framework to optimize the CF algorithm, and the computational speed is improved considerably. The UV decomposition used in this paper is a matrix factorization technique often used in CF; it does not require calculating the interpolation weights of neighbors, which makes it more convenient in industry.
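For illustration, a sketch of the UV-decomposition step on the observed entries of a sparse score matrix; in a MapReduce setting the per-rating updates are distributed by key (the data and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n_users, n_items, k = 50, 30, 4
R = np.where(rng.random((n_users, n_items)) < 0.2,
             rng.integers(1, 6, (n_users, n_items)).astype(float), np.nan)

U = rng.normal(0, 0.1, (n_users, k))
V = rng.normal(0, 0.1, (k, n_items))
obs = np.argwhere(~np.isnan(R))                # indices of observed ratings
lr, reg = 0.02, 0.05

for epoch in range(30):
    for i, j in obs:                           # in MapReduce: map over rated pairs
        err = R[i, j] - U[i] @ V[:, j]
        U[i] += lr * (err * V[:, j] - reg * U[i])
        V[:, j] += lr * (err * U[i] - reg * V[:, j])

rmse = np.sqrt(np.nanmean((R - U @ V) ** 2))   # error on observed entries only
print("training RMSE on observed ratings:", round(rmse, 3))
```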

Keywords: collaborative filtering, recommendation, data normalization, mapreduce

Procedia PDF Downloads 217