Search results for: positive definite kernels
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6723

6723 Physics of the Riemann Zeros: The Low Bound for the Zeta Derivative via Quantum Field Theory

Authors: Andrey Egorov

Abstract:

A product of the specific Lagrangian and the entropy factor is defined. Its positive definiteness is established for a proper choice of the coupling constant. The passage from statistical mechanics to quantum field theory is performed by a Wick rotation. The Green function (a convolution of the spectral amplitude and the propagator) is positive. Masses of quasiparticles are computed as residues. The role of the zeta derivative at the zeta zeros is then highlighted, and the corresponding lower bound is obtained.

Keywords: mass gap, positive definite kernels, quantum fields, Riemann zeta zeros

Procedia PDF Downloads 16
6722 On the Fourth-Order Hybrid Beta Polynomial Kernels in Kernel Density Estimation

Authors: Benson Ade Eniola Afere

Abstract:

This paper introduces a family of fourth-order hybrid beta polynomial kernels developed for statistical analysis. The assessment of these kernels' performance centers on two critical metrics: asymptotic mean integrated squared error (AMISE) and kernel efficiency. Through the utilization of both simulated and real-world datasets, a comprehensive evaluation was conducted, facilitating a thorough comparison with conventional fourth-order polynomial kernels. The evaluation procedure encompassed the computation of AMISE and efficiency values for both the proposed hybrid kernels and the established classical kernels. The consistently observed trend was the superior performance of the hybrid kernels when compared to their classical counterparts. This trend persisted across diverse datasets, underscoring the resilience and efficacy of the hybrid approach. By leveraging these performance metrics and conducting evaluations on both simulated and real-world data, this study furnishes compelling evidence in favour of the superiority of the proposed hybrid beta polynomial kernels. The discernible enhancement in performance, as indicated by lower AMISE values and higher efficiency scores, strongly suggests that the proposed kernels offer heightened suitability for statistical analysis tasks when compared to traditional kernels.
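
As a rough illustration of the kind of comparison described above (not the authors' code, and using a standard fourth-order polynomial kernel rather than the hybrid beta kernels, which the abstract does not specify), the following sketch builds a kernel density estimate and scores it by integrated squared error against a known density:

```python
# Minimal sketch: KDE with a classical fourth-order polynomial kernel,
# scored against the true density of the simulated data.
import numpy as np

def k4(u):
    """Classical fourth-order polynomial kernel (vanishing second moment)."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1, (15.0 / 32.0) * (3 - 10 * u**2 + 7 * u**4), 0.0)

def kde(x_grid, data, h, kernel):
    """Evaluate the kernel density estimate on x_grid."""
    u = (x_grid[:, None] - data[None, :]) / h
    return kernel(u).mean(axis=1) / h

rng = np.random.default_rng(0)
data = rng.standard_normal(400)                      # sample from N(0, 1)
grid = np.linspace(-4, 4, 801)
true_pdf = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)

for h in (0.3, 0.5, 0.8):
    est = kde(grid, data, h, k4)
    ise = np.trapz((est - true_pdf) ** 2, grid)      # integrated squared error
    print(f"h = {h:.1f}  ISE = {ise:.5f}")
```

An AMISE-style comparison of candidate kernels would repeat this over many simulated samples, with the bandwidth chosen optimally for each kernel.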

Keywords: AMISE, efficiency, fourth-order kernels, hybrid kernels, kernel density estimation

Procedia PDF Downloads 40
6721 On the Cluster of the Families of Hybrid Polynomial Kernels in Kernel Density Estimation

Authors: Benson Ade Eniola Afere

Abstract:

Over the years, kernel density estimation has been extensively studied within the context of nonparametric density estimation. The fundamental components of kernel density estimation are the kernel function and the bandwidth. While the mathematical exploration of the kernel component has been relatively limited, its selection and development remain crucial. The mean integrated squared error (MISE), serving as a measure of discrepancy, provides a robust framework for assessing the effectiveness of any kernel function. A kernel function with a lower MISE is generally considered to perform better than one with a higher MISE. Hence, the primary aim of this article is to create kernels that exhibit significantly reduced MISE when compared to existing classical kernels. Consequently, this article introduces a cluster of hybrid polynomial kernel families. The construction of the proposed kernel functions is carried out heuristically by combining two kernels from the classical polynomial kernel family using probability axioms. We delve into the analysis of error propagation within these kernels. To assess their performance, simulation experiments and real-life datasets are employed. The obtained results demonstrate that the proposed hybrid kernels surpass their classical counterparts in terms of performance.
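
The abstract does not give the explicit construction, but one simple combination that respects the probability axioms (non-negativity and unit integral) is a convex mixture of two classical polynomial kernels; the sketch below is illustrative only and checks those properties numerically:

```python
# Illustrative sketch: a convex combination of two classical second-order
# polynomial kernels stays non-negative and still integrates to one.
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def biweight(u):
    return np.where(np.abs(u) <= 1, (15.0 / 16.0) * (1 - u**2) ** 2, 0.0)

def hybrid(u, lam=0.5):
    """Convex combination of the two classical kernels (0 <= lam <= 1)."""
    return lam * epanechnikov(u) + (1 - lam) * biweight(u)

u = np.linspace(-1, 1, 20001)
print("integral of hybrid kernel:", np.trapz(hybrid(u), u))   # ~1.0
print("second moment of hybrid kernel:", np.trapz(u**2 * hybrid(u), u))
```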

Keywords: classical polynomial kernels, cluster of families, global error, hybrid kernels, kernel density estimation, Monte Carlo simulation

Procedia PDF Downloads 61
6720 On the Algorithmic Iterative Solutions of Conjugate Gradient, Gauss-Seidel and Jacobi Methods for Solving Systems of Linear Equations

Authors: Hussaini Doko Ibrahim, Hamilton Cyprian Chinwenyi, Henrietta Nkem Ude

Abstract:

In this paper, efforts were made to examine and compare the algorithmic iterative solutions of the conjugate gradient method against other methods, such as the Gauss-Seidel and Jacobi approaches, for solving systems of linear equations of the form Ax=b, where A is a real n×n symmetric and positive definite matrix. We carried out the algorithmic iterative steps and obtained analytical solutions of a typical 3×3 symmetric and positive definite system using the three methods described in this paper (Gauss-Seidel, Jacobi, and conjugate gradient). From the results obtained, we found that the conjugate gradient method converges to the exact solution in fewer iterative steps than the other two methods, which required many more iterations and considerably more time while only tending toward the exact solution.
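
A minimal sketch of such a comparison, assuming an arbitrary 3×3 symmetric positive definite system rather than the one used in the paper, is given below; it contrasts the conjugate gradient iteration with the Jacobi iteration on the same Ax = b:

```python
# Sketch: conjugate gradient vs. Jacobi on a small symmetric positive definite system.
import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])      # symmetric positive definite (assumed example)
b = np.array([6.0, 4.0, 3.0])

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for k in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, k + 1
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, max_iter

def jacobi(A, b, tol=1e-10, max_iter=10000):
    x = np.zeros_like(b)
    D = np.diag(A)
    R = A - np.diag(D)
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

for name, solver in [("conjugate gradient", conjugate_gradient), ("Jacobi", jacobi)]:
    x, its = solver(A, b)
    print(f"{name}: iterations = {its}, solution = {x}")
```

For a symmetric positive definite matrix of order n, conjugate gradient terminates in at most n steps in exact arithmetic, which is why it reaches the exact solution so much sooner than the stationary iterations.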

Keywords: conjugate gradient, linear equations, symmetric and positive definite matrix, Gauss-Seidel, Jacobi, algorithm

Procedia PDF Downloads 117
6719 Image Compression Based on Regression SVM and Biorthogonal Wavelets

Authors: Zikiou Nadia, Lahdir Mourad, Ameur Soltane

Abstract:

In this paper, we propose an effective method for image compression based on support vector regression (SVR), with three different kernels, and the biorthogonal 2D discrete wavelet transform. SVM regression can learn dependencies from the training data and compress them using fewer training points (support vectors) to represent the original data and eliminate redundancy. A biorthogonal wavelet is used to transform the image, and the coefficients obtained are then trained with SVMs using different kernels (Gaussian, polynomial, and linear). Run-length and arithmetic coders are used to encode the support vectors and their corresponding weights obtained from the SVM regression. The peak signal-to-noise ratio (PSNR) values and compression ratios of several test images compressed with our algorithm, using the different kernels, are presented. Compared with the other kernels, the Gaussian kernel achieves better image quality. Experimental results show that the compression performance of our method is substantially improved.
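
A compressed sketch of such a pipeline, under stated assumptions (a synthetic image, PyWavelets for the biorthogonal transform, and scikit-learn's SVR; the entropy coders and the paper's parameter choices are omitted), might look as follows:

```python
# Sketch: 2D biorthogonal DWT, then SVR with different kernels fitted to the
# approximation coefficients so that only support vectors need to be stored.
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(1)
image = rng.random((64, 64))                         # stand-in for a test image

cA, (cH, cV, cD) = pywt.dwt2(image, 'bior2.2')       # biorthogonal 2D DWT
y = cA.ravel()                                       # coefficients as a 1-D signal
X = np.arange(y.size, dtype=float).reshape(-1, 1)    # sample positions

for kernel in ("rbf", "poly", "linear"):             # Gaussian, polynomial, linear
    svr = SVR(kernel=kernel, C=10.0, epsilon=0.05).fit(X, y)
    approx = svr.predict(X)
    mse = np.mean((approx - y) ** 2)
    reduction = y.size / svr.support_.size           # crude coefficient reduction
    print(f"{kernel:6s}  support vectors = {svr.support_.size:4d}  "
          f"reduction ~ {reduction:4.1f}x  MSE = {mse:.5f}")
```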

Keywords: image compression, 2D discrete wavelet transform (DWT-2D), support vector regression (SVR), SVM kernels, run-length, arithmetic coding

Procedia PDF Downloads 350
6718 Sharp Estimates of Oscillatory Singular Integrals with Rough Kernels

Authors: H. Al-Qassem, L. Cheng, Y. Pan

Abstract:

In this paper, we establish sharp bounds for oscillatory singular integrals with an arbitrary real polynomial phase P. Our kernels are allowed to be rough both on the unit sphere and in the radial direction. We show that the bounds grow no faster than log(deg(P)), which is optimal and was first obtained by Parissis and Papadimitrakis for kernels without any radial roughness. Our results substantially improve many previously known results. Among the key ingredients of our methods are a sharp L¹→L² estimate and an extrapolation argument.
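
Schematically, and following the standard formulation in this literature rather than the paper's exact statement, the operators and the bound in question take the form:

```latex
% Schematic form only; the precise hypotheses on \Omega and h are those of the paper.
\[
  T_{P}f(x) \;=\; \mathrm{p.v.}\int_{\mathbb{R}^{n}}
      e^{iP(y)}\,\frac{\Omega(y/|y|)\,h(|y|)}{|y|^{n}}\,f(x-y)\,dy ,
\]
\[
  \bigl\|T_{P}\bigr\|_{L^{p}\to L^{p}} \;\le\; C_{p}\,\log\bigl(2+\deg P\bigr),
  \qquad 1<p<\infty ,
\]
% so the operator norm grows no faster than log(deg P), which is the sharp rate.
```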

Keywords: oscillatory singular integral, rough kernel, singular integral, Orlicz spaces, block spaces, extrapolation, L^{p} boundedness

Procedia PDF Downloads 432
6717 Definite Article Errors and Effect of L1 Transfer

Authors: Bimrisha Mali

Abstract:

The present study investigates the types of errors that English as a second language (ESL) learners produce when using the definite article 'the'. The participants were given a questionnaire based on a learner ability test, consisting of three cloze tests and two free composition tests. Each participant's response was received in the form of written data. A total of 78 participants from three government schools took part in the study. The participants were high-school students, aged 14-15, from rural Assam, a north-eastern state of India. The medium of instruction and of communication among the students is the local language, Assamese. Pit Corder's steps for conducting error analysis were followed for the analysis procedure. Four types of errors were found: (1) deletion of the definite article, (2) use of the definite article as a modifier/adjective, (3) incorrect use of the definite article with singular proper nouns, and (4) substitution of the definite article by the indefinite article 'a'. Classifiers in Assamese that express definiteness are used with nouns, adjectives, and numerals. It is found that native language (L1) transfer plays a pivotal role in the learners' errors. The analysis reveals the learners' inability to acquire the semantic connotation of definiteness in English due to native language (L1) interference.

Keywords: definite article error, L1 transfer, error analysis, ESL

Procedia PDF Downloads 101
6716 Analysis of Aspergillus fumigatus IgG Serologic Cut-Off Values to Increase Diagnostic Specificity of Allergic Bronchopulmonary Aspergillosis

Authors: Sushmita Roy Chowdhury, Steve Holding, Sujoy Khan

Abstract:

The immunogenic responses of the lung towards the fungus Aspergillus fumigatus may range from invasive aspergillosis in the immunocompromised, to a fungal ball or infection within a lung cavity in those with structural lung lesions, to allergic bronchopulmonary aspergillosis (ABPA). Patients with asthma or cystic fibrosis are particularly predisposed to ABPA. Consensus guidelines have established criteria for the diagnosis of ABPA, but uncertainty remains over the serologic cut-off values that would increase its diagnostic specificity. We retrospectively analyzed 80 patients with severe asthma and evidence of peripheral blood eosinophilia (>500) over the last 3 years who underwent all serologic tests to exclude ABPA. Total IgE, specific IgE, and specific IgG levels against Aspergillus fumigatus were measured using the ImmunoCAP Phadia-100 (Thermo Fisher Scientific, Sweden). The modified ISHAM working group 2013 criteria (obligate criteria: asthma or cystic fibrosis, total IgE > 1000 IU/ml or > 417 kU/L, and positive specific IgE to Aspergillus fumigatus or skin test positivity; together with ≥ 2 of peripheral eosinophilia, positive specific IgG to Aspergillus fumigatus, and consistent radiographic opacities) were used in the clinical workup for the final diagnosis of ABPA. Patients were divided into three groups: definite, possible, and no evidence of ABPA. Specific IgG Aspergillus fumigatus levels were not used to assign patients to any of the groups. Of the 80 patients (48 males, 32 females; mean age 53.9 years, SD 15.8) selected for the analysis, 30 had positive specific IgE against Aspergillus fumigatus (37.5%). Thirteen patients fulfilled the modified ISHAM working group 2013 criteria for ABPA ('definite'), while 15 patients were 'possible' ABPA and 52 did not fulfill the criteria (not ABPA). As IgE levels were not normally distributed, median levels were used in the analysis. Median total IgE levels of patients with definite and possible ABPA were 2144 kU/L and 2597 kU/L, respectively (non-significant), while median specific IgE Aspergillus fumigatus levels, at 4.35 kUA/L and 1.47 kUA/L respectively, were significantly different (comparison of standard deviations, F-statistic 3.2267, significance level p=0.040). Mean levels of IgG anti-Aspergillus fumigatus in the three groups (definite, possible, and no evidence of ABPA) were compared using ANOVA (Statgraphics Centurion Professional XV, Statpoint Inc). The mean level of IgG anti-Aspergillus fumigatus (Gm3) in definite ABPA was 125.17 mgA/L (± SD 54.84, 95% CI 92.03-158.32), while mean Gm3 levels in possible and no ABPA were 18.61 mgA/L and 30.05 mgA/L, respectively. ANOVA showed a significant difference between the definite group and the other groups (p < 0.001). This was confirmed using multiple range tests (Fisher's least significant difference procedure). There was no significant difference between the possible ABPA and not ABPA groups (p > 0.05). The study showed that a sizeable proportion of patients with asthma in this part of India are sensitized to Aspergillus fumigatus. A higher cut-off value of Gm3 ≥ 80 mgA/L provides higher serologic specificity towards definite ABPA. Long-term studies would provide more information on whether patients with 'possible' ABPA and positive Gm3 later develop clear ABPA and thereby differ from the Gm3-negative group. Serologic testing with clearly defined cut-offs is a valuable adjunct in the diagnosis of ABPA.

Keywords: allergic bronchopulmonary aspergillosis, Aspergillus fumigatus, asthma, IgE level

Procedia PDF Downloads 174
6715 Calculate Consumer Surplus and Producer Surplus Using Integration

Authors: Bojan Radisic, Katarina Stavlic

Abstract:

The paper describes two economic terms, consumer surplus and producer surplus, using definite integrals (the Riemann integral). The consumer surplus is the difference between what consumers are willing to pay and the actual price. The producer surplus is the difference between what producers receive by selling at the current price and the lower price they would have been willing to accept. Definite integrals are used to express the definitions and mathematical formulas of the consumer surplus and the producer surplus, and these are then applied to numerical examples.
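
In the standard textbook formulation, with demand curve D(q), supply curve S(q), and equilibrium quantity and price (q*, p*), the two surpluses are the following definite integrals; the numerical example is illustrative and not taken from the paper:

```latex
\[
  \mathrm{CS} \;=\; \int_{0}^{q^{*}} D(q)\,dq \;-\; p^{*}q^{*},
  \qquad
  \mathrm{PS} \;=\; p^{*}q^{*} \;-\; \int_{0}^{q^{*}} S(q)\,dq .
\]
% Example: for D(q) = 100 - q and S(q) = 20 + q, equilibrium is q* = 40, p* = 60,
% giving CS = \int_0^{40}(100-q)\,dq - 2400 = 800 and
%        PS = 2400 - \int_0^{40}(20+q)\,dq = 800.
```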

Keywords: consumer surplus, producer surplus, definite integral, integration

Procedia PDF Downloads 536
6714 Extraction and Antibacterial Studies of Oil from Three Mango Kernel Obtained from Makurdi, Nigeria

Authors: K. Asemave, D. O. Abakpa, T. T. Ligom

Abstract:

The ability of bacteria to develop resistance to many antibiotics cannot be overlooked, given the multifaceted health challenges of the present times. For this reason, a lot of attention is directed at botanicals and their products in the search for new antibacterial agents. Mango kernel oils (MKO), in particular, can be heavily valorized by taking advantage of the myriad bioactive phytochemicals they contain. Herein, we validated the use of MKO as a bioactive agent against bacteria. The MKOs for the study were obtained by Soxhlet extraction with ethanol and hexane for 4 h from three different mango kernels, namely 'local' (sample A), 'julie' (sample B), and 'john' (sample C). Prior to extraction, the seed kernels were dried in an oven at 100 °C for 8 h and ground into fine particles. Hexane gave a higher yield of the oils than ethanol. It was also qualitatively confirmed that the mango kernel oils contain phytochemicals such as phenols, quinones, saponins, and terpenoids. The results of the antibacterial activities of the MKO against both gram-positive (Staphylococcus aureus) and gram-negative (Pseudomonas aeruginosa) bacteria at different concentrations showed that the oils extracted with ethanol gave better antibacterial properties than those extracted with hexane. Moreover, the bioactivity was best with the local mango kernel oil. This work therefore validates the previous claim that MKOs are effective antibacterial agents. Thus, these oils (especially the ethanol-derived ones) can be used as bacteriostatic and antibacterial agents in, for example, the food, cosmetics, and allied industries.

Keywords: bacteria, mango, kernel, oil, phytochemicals

Procedia PDF Downloads 122
6713 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy

Authors: Paul R Armstrong

Abstract:

Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool in which the remaining 90% are hybrid siblings. This places time constraints on DH production, and manual sorting is often not accurate. Automated sorting based on the chemical composition of the kernel can be effective, but devices such as NMR have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform to accurately identify DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel from maize crosses intentionally developed to create larger-than-normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares regression (PLS) model for oil content, or for a categorical reference value of 1 (DH kernel) or 2 (hybrid kernel), and these models were then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model, developed from all crosses to predict oil content, to sort each induction cross; the second was the development of a specific model from a single induction cross in which approximately fifty DH and one hundred hybrid kernels were used. This second approach used a categorical reference value of 1 or 2, instead of oil content, for the PLS model, and the kernels selected for the calibration set were manually referenced based on traditional commercial methods using the coloration of the tip cap and germ areas. The generalized PLS oil model statistics were R² = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting with this model extracted 55% to 85% of the haploid kernels from the four induction crosses. Using the second method of generating a model for each cross yielded model statistics ranging from R² = 0.96 to 0.98 and RMSE from 0.08 to 0.10. Sorting in this case resulted in 100% correct classification but required models that were cross-specific. In summary, the first, generalized oil-model method could be used to sort a significant number of kernels from a kernel pool but did not approach the accuracy of developing a sorting model from a single cross. The penalty of the second method is that a PLS model needs to be developed for each individual cross. In conclusion, both methods could find useful application in the sorting of DH from hybrid kernels.
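
A toy sketch of the cross-specific approach (synthetic spectra and an assumed oil threshold, not skNIR data) is given below; it fits a PLS model to per-kernel spectra and thresholds the prediction to recover the haploid fraction:

```python
# Illustrative sketch: PLS regression from synthetic NIR-like spectra to oil
# content, then thresholding the prediction to separate low-oil haploids.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n_haploid, n_hybrid, n_wavelengths = 50, 100, 120

base = rng.normal(0.0, 0.05, size=(n_haploid + n_hybrid, n_wavelengths))
band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 60) / 8.0) ** 2)  # "oil" band
oil = np.concatenate([rng.normal(4.0, 0.5, n_haploid),    # haploids: lower oil
                      rng.normal(6.0, 0.5, n_hybrid)])    # hybrids: higher oil
X = base + oil[:, None] * 0.02 * band
y = oil

pls = PLSRegression(n_components=5).fit(X, y)
pred = pls.predict(X).ravel()

threshold = 5.0                                   # assumed oil cut-off between classes
predicted_haploid = pred < threshold
true_haploid = np.arange(X.shape[0]) < n_haploid
recovery = (predicted_haploid & true_haploid).sum() / n_haploid
print(f"haploid kernels recovered: {100 * recovery:.1f}%")
```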

Keywords: NIR, haploids, maize, sorting

Procedia PDF Downloads 276
6712 Rough Oscillatory Singular Integrals on Rⁿ

Authors: H. M. Al-Qassem, L. Cheng, Y. Pan

Abstract:

In this paper, we establish sharp bounds for oscillatory singular integrals with an arbitrary real polynomial phase P. Our kernels are allowed to be rough both on the unit sphere and in the radial direction. We show that the bounds grow no faster than log(deg(P)), which is optimal and was first obtained by Parissis and Papadimitrakis for kernels without any radial roughness. Among the key ingredients of our methods are an L¹→L² estimate and extrapolation.

Keywords: oscillatory singular integral, rough kernel, singular integral, Orlicz spaces, block spaces, extrapolation, L^{p} boundedness

Procedia PDF Downloads 326
6711 On the Mathematical Modelling of Aggregative Stability of Disperse Systems

Authors: Arnold M. Brener, Lesbek Tashimov, Ablakim S. Muratov

Abstract:

The paper deals with a special model for coagulation kernels that introduces new control parameters into the Smoluchowski equation for binary aggregation. On the basis of this model, a new approach to evaluating the aggregative stability of disperse systems is proposed. With the help of this approach, simple estimates of the aggregative stability of various types of hydrophilic nano-suspensions are obtained.
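
For reference, the binary-aggregation Smoluchowski equation to which the abstract refers is the standard population balance below, where n(v, t) is the number density of aggregates of size v and K(u, w) is the coagulation kernel carrying the model's control parameters:

```latex
\[
  \frac{\partial n(v,t)}{\partial t}
  \;=\; \frac{1}{2}\int_{0}^{v} K(v-u,\,u)\,n(v-u,t)\,n(u,t)\,du
  \;-\; n(v,t)\int_{0}^{\infty} K(v,u)\,n(u,t)\,du .
\]
```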

Keywords: aggregative stability, coagulation kernels, disperse systems, mathematical model

Procedia PDF Downloads 285
6710 The Linear Combination of Kernels in the Estimation of the Cumulative Distribution Functions

Authors: Abdel-Razzaq Mugdadi, Ruqayyah Sani

Abstract:

The kernel distribution function estimator (KDFE) method is the most popular method for nonparametric estimation of the cumulative distribution function. The kernel and the bandwidth are the most important components of this estimator. In this investigation, we replace the kernel in the KDFE with a linear combination of kernels to obtain a new estimator. The mean integrated squared error (MISE), the asymptotic mean integrated squared error (AMISE), and the asymptotically optimal bandwidth for the new estimator are derived. We propose a new data-based method to select the bandwidth for the new estimator. The new technique is based on the plug-in technique from density estimation. We evaluate the new estimator and the new technique using simulations and real-life data.
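
A minimal sketch of such an estimator, assuming a two-kernel combination with fixed weights (the paper's kernels, weights, and plug-in bandwidth rule are not reproduced), is:

```python
# Sketch: a kernel distribution function estimator whose kernel is a linear
# combination of two integrated kernels with weights summing to one.
import numpy as np

def W_epanechnikov(u):
    """Integrated Epanechnikov kernel: int_{-1}^{u} 0.75(1 - t^2) dt, clipped."""
    u = np.clip(u, -1.0, 1.0)
    return 0.5 + 0.75 * (u - u**3 / 3.0)

def W_uniform(u):
    """Integrated uniform kernel on [-1, 1]."""
    return np.clip((u + 1.0) / 2.0, 0.0, 1.0)

def kdfe(x_grid, data, h, weights=(0.6, 0.4)):
    """KDFE with a linear combination of integrated kernels."""
    u = (x_grid[:, None] - data[None, :]) / h
    W = weights[0] * W_epanechnikov(u) + weights[1] * W_uniform(u)
    return W.mean(axis=1)

rng = np.random.default_rng(3)
data = rng.standard_normal(300)
grid = np.linspace(-3, 3, 7)
print(np.round(kdfe(grid, data, h=0.4), 3))     # estimated CDF on the grid
```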

Keywords: estimation, bandwidth, mean square error, cumulative distribution function

Procedia PDF Downloads 543
6709 Enhancing Predictive Accuracy in Pharmaceutical Sales through an Ensemble Kernel Gaussian Process Regression Approach

Authors: Shahin Mirshekari, Mohammadreza Moradi, Hossein Jafari, Mehdi Jafari, Mohammad Ensaf

Abstract:

This research employs Gaussian Process Regression (GPR) with an ensemble kernel, integrating Exponential Squared, Revised Matern, and Rational Quadratic kernels to analyze pharmaceutical sales data. Bayesian optimization was used to identify optimal kernel weights: 0.76 for Exponential Squared, 0.21 for Revised Matern, and 0.13 for Rational Quadratic. The ensemble kernel demonstrated superior performance in predictive accuracy, achieving an R² score near 1.0, and significantly lower values in MSE, MAE, and RMSE. These findings highlight the efficacy of ensemble kernels in GPR for predictive analytics in complex pharmaceutical sales datasets.
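
A sketch of the general setup, under stated assumptions (scikit-learn's RBF kernel standing in for "Exponential Squared", the standard Matern kernel for "Revised Matern", the reported weights applied as fixed scale factors, and synthetic monthly sales data), is:

```python
# Sketch: Gaussian process regression with an ensemble (weighted sum) kernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

rng = np.random.default_rng(4)
t = np.arange(60, dtype=float).reshape(-1, 1)             # 60 months
sales = (100 + 0.8 * t.ravel()
         + 10 * np.sin(2 * np.pi * t.ravel() / 12)
         + rng.normal(0, 2, 60))                          # trend + seasonality + noise

ensemble_kernel = (0.76 * RBF(length_scale=12.0)
                   + 0.21 * Matern(length_scale=6.0, nu=1.5)
                   + 0.13 * RationalQuadratic(length_scale=3.0, alpha=1.0))

# optimizer=None keeps the kernel (including the ensemble weights) fixed in this sketch.
gpr = GaussianProcessRegressor(kernel=ensemble_kernel, alpha=1e-2,
                               optimizer=None, normalize_y=True).fit(t, sales)
pred, std = gpr.predict(t, return_std=True)
rmse = np.sqrt(np.mean((pred - sales) ** 2))
print(f"in-sample RMSE: {rmse:.3f}")
```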

Keywords: Gaussian process regression, ensemble kernels, Bayesian optimization, pharmaceutical sales analysis, time series forecasting, data analysis

Procedia PDF Downloads 26
6708 The Foundation Binary-Signals Mechanics and Actual-Information Model of Universe

Authors: Elsadig Naseraddeen Ahmed Mohamed

Abstract:

In contrast to the uncertainty and complementarity principles, it is shown in the present paper that the probability of the simultaneous occupation of any definite values of coordinates by any definite values of momentum and energy, at any definite instant of time, can be described by a binary definite function. This function is equivalent to the difference between the numbers of occupation and evacuation epochs up to that time, and also to the number of exchanges between those occupation and evacuation epochs up to that time, modulo two. These binary definite quantities can be defined at every point of the real time line, so they form a binary signal that represents a complete mechanical description of physical reality. The times of these exchanges mark the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated, using the fact that the universe's events actually extend along the positive and negative real time line in one direction of extension as the number of exchanges increases. There therefore exists a non-invertible transformation matrix, defined as the product of an invertible rotation matrix and a non-invertible scaling matrix, which change the direction and the magnitude of the exchange-event vector, respectively. These non-invertible transformations will be called actual transformations, in contrast to information transformations, by which the universe's events transformed by actual transformations can be navigated backward and forward along the real time line; these information transformations are derived as elements of a group that can be associated with their corresponding actual transformations. The actual-information model of the universe is derived by assuming the existence of a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy, after which the universe begins expanding in spacetime. This assumption makes superfluous the existence of Laplace's demon, who at one moment could measure the positions and momenta of all constituent particles of the universe and then use the laws of classical mechanics to predict the universe's entire future and past. We only need to establish analog-to-digital converters to sense the binary signals that determine the boundaries of the occupation and evacuation epochs of the definite values of coordinates, relative to their origin, by the definite values of momentum and energy as present events of the universe; from these, its past and future events can be predicted approximately with high precision.

Keywords: binary-signal mechanics, actual-information model of the universe, actual-transformation, information-transformation, uncertainty principle, Laplace's demon

Procedia PDF Downloads 138
6707 Study of Biodegradable Composite Materials Based on Polylactic Acid and Vegetal Reinforcements

Authors: Manel Hannachi, Mustapha Nechiche, Said Azem

Abstract:

This study focuses on biodegradable materials made from poly-lactic acid (PLA) and vegetal reinforcements. Three materials are developed from PLA as the matrix and, as reinforcements: (i) olive kernels (OK), (ii) alfa (α) short fibers, and (iii) an OK + α mixture. After processing the PLA pellets, grinding the olive kernels into powder, and cutting the alfa stems into short fibers, three mixtures, namely PLA-OK, PLA-α, and PLA-OK-α, are prepared and homogenized in a Turbula® mixer. These mixtures are then compacted at 180 °C under 10 MPa for 15 min. Scanning electron microscopy (SEM) examinations show that the PLA matrix adheres to the surface of all reinforcements and that the dispersion of these reinforcements in the matrix is good. X-ray diffraction (XRD) analyses highlight an increase in the PLA inter-reticular distances, especially in the PLA-OK case. These results are explained by the dissociation of some molecules derived from the reinforcements, followed by diffusion of the released atoms into the PLA structure. This is consistent with the Fourier transform infrared spectroscopy (FTIR) and differential scanning calorimetry (DSC) analysis results.

Keywords: alfa short fibers, biodegradable composite, olive kernels, poly-lactic acid

Procedia PDF Downloads 124
6706 Solution of S3 Problem of Deformation Mechanics for a Definite Condition and Resulting Modifications of Important Failure Theories

Authors: Ranajay Bhowmick

Abstract:

Analysis of stresses for an infinitesimal tetrahedron leads to a cubic equation in the three stress invariants. This cubic equation, when solved under a definite condition, gives the principal stresses directly, without requiring any cumbersome and time-consuming trial-and-error methods or iterative numerical procedures. Since the failure criteria of different materials are generally expressed as functions of the principal stresses, an attempt has been made in this study to incorporate the solutions of the cubic equation, in the form of principal stresses obtained for a definite condition, into some of the established failure theories to determine their modified descriptions. It has been observed that the failure theories can be represented using the quadratic stress invariant and the orientation of the principal plane.
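
The cubic equation referred to above is the characteristic equation of the stress tensor in terms of its invariants; its standard trigonometric (Viète) solution, reproduced here for reference without the paper's particular "definite condition", reads:

```latex
\[
  \sigma^{3} - I_{1}\sigma^{2} + I_{2}\sigma - I_{3} = 0 ,
\]
\[
  \sigma_{k} = \frac{I_{1}}{3}
  + 2\sqrt{-\frac{p}{3}}\,
    \cos\!\left[\frac{1}{3}\arccos\!\left(\frac{3q}{2p}\sqrt{-\frac{3}{p}}\right)
                - \frac{2\pi k}{3}\right],
  \qquad k = 0, 1, 2,
\]
\[
  \text{where } p = I_{2} - \frac{I_{1}^{2}}{3},
  \qquad q = -\frac{2I_{1}^{3}}{27} + \frac{I_{1}I_{2}}{3} - I_{3}.
\]
```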

Keywords: cubic equation, stress invariant, trigonometric, explicit solution, principal stress, failure criterion

Procedia PDF Downloads 105
6705 Evolving Convolutional Filter Using Genetic Algorithm for Image Classification

Authors: Rujia Chen, Ajit Narayanan

Abstract:

Convolutional neural networks (CNNs), as typically applied in deep learning, use layer-wise backpropagation (BP) to construct filters and kernels for feature extraction. Such filters are 2D or 3D groups of weights for constructing feature maps at subsequent layers of the CNN and are shared across the entire input. BP, as a gradient descent algorithm, has well-known problems with getting stuck at local optima. The use of genetic algorithms (GAs) for evolving weights between layers of standard artificial neural networks (ANNs) is a well-established area of neuroevolution. In particular, the use of crossover techniques when optimizing weights can help to overcome problems of local optima. However, the application of GAs to evolving the weights of filters and kernels in CNNs is not yet an established area of neuroevolution. In this paper, a GA-based filter development algorithm is proposed. The results of the proof-of-concept experiments described in this paper show that the proposed GA can find filter weights through evolutionary techniques rather than BP learning. For some simple classification tasks, such as geometric shape recognition, the proposed algorithm can achieve 100% accuracy. The results for MNIST classification, while not as good as those possible through standard filter learning with BP, show that filter and kernel evolution warrants further investigation as a new subarea of neuroevolution for deep architectures.
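
A proof-of-concept sketch of the idea, with a toy regression fitness instead of the paper's classification objective, is shown below; the GA evolves a 3×3 filter directly, with no backpropagation:

```python
# Sketch: a simple genetic algorithm (selection, crossover, mutation) evolving
# a 3x3 convolutional filter to match a toy target response.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(5)
image = rng.random((32, 32))
target = convolve2d(image, np.array([[1, 0, -1],
                                     [2, 0, -2],
                                     [1, 0, -1]]), mode="valid")  # toy target

def fitness(filt):
    """Negative squared error between the filter's feature map and the target."""
    return -np.mean((convolve2d(image, filt.reshape(3, 3), mode="valid") - target) ** 2)

pop = rng.normal(0, 1, size=(40, 9))                 # population of flattened filters
for generation in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]                        # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 9)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        child += rng.normal(0, 0.1, size=9) * (rng.random(9) < 0.2)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("evolved filter:\n", np.round(best.reshape(3, 3), 2))
```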

Keywords: neuroevolution, convolutional neural network, genetic algorithm, filters, kernels

Procedia PDF Downloads 155
6704 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely recast in the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, because of the large number of unknowns to be optimized with respect to a training set that generally must be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels because of the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter. The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional cost is incurred in computing the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are compared quantitatively with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks within the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
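
A forward-pass sketch of the mechanism described above, with illustrative sizes and counts, is shown below; the random filters stay fixed, and only one scalar weight per filter (initialised here from the filter's standard deviation) would be subject to training:

```python
# Sketch: a bank of fixed random filters of several sizes, each scaled by a
# single scalar weight, producing multi-scale feature maps from one image.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(6)
image = rng.random((28, 28))

filter_bank, scalar_weights = [], []
for size in (3, 5, 7, 9):                 # multiple scales at the same layer
    for _ in range(4):
        f = rng.normal(0, 1.0 / size, size=(size, size))   # fixed random kernel
        filter_bank.append(f)
        scalar_weights.append(f.std())    # the only per-filter trainable unknown

feature_maps = [w * convolve2d(image, f, mode="same")
                for f, w in zip(filter_bank, scalar_weights)]
print(f"{len(feature_maps)} feature maps of shape {feature_maps[0].shape}")
```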

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 258
6703 The Noun-Phrase Elements on the Usage of the Zero Article

Authors: Wen Zhen

Abstract:

Compared to content words, function words, and especially articles, have been relatively overlooked by English learners. The article system, to a certain extent, becomes an obstacle to knowing English better, driven by different factors. Three principal factors relating to the nature of the articles can be identified when accounting for the difficulty of the English article system. Further complicating the article system are difficulties in the second-language acquisition process, for [-ART] learners have to create a new category, causing even most non-native speakers at a proficient level to make errors. According to the sequence of acquisition of the English articles, the zero article is acquired first and with high inaccuracy. The zero article is often overused in the early stages of L2 acquisition. Although learners at the intermediate level move towards underusing the zero article, because they realize that the zero article does not cover every case, overproduction of the zero article occurs even among advanced L2 learners. The aim of the study is to investigate the noun-phrase factors that give rise to incorrect usage or overuse of the zero article, thus providing suggestions for L2 English acquisition. Moreover, it enables teachers to carry out effective instruction that activates conscious learning in students. The research question is answered through a corpus-based, data-driven approach that analyzes the noun-phrase elements in terms of the semantic context and countability of noun phrases. Based on the analysis of the International Thurber Thesis corpus, the results show that: (1) although the [-definite, -specific] context favored the zero article, both [-definite, +specific] and [+definite, -specific] showed less influence; when we reflect on the frequency order of the zero article, prototypicality plays a vital role in it; (2) the EFL learners in this study have trouble classifying abstract nouns as countable; overuse of the zero article arises when learners cannot make clear judgements on countability as nouns shift from (+definite) to (-definite), and once a noun is perceived as uncountable, the choice falls back on the zero article. These findings suggest that learners should be engaged in recognizing the countability of new vocabulary by explaining nouns in lexical phrases, and should explore more complex aspects such as discourse-dependent analysis.

Keywords: noun phrase, zero article, corpus, second language acquisition

Procedia PDF Downloads 221
6702 An Observational Study of Vitamin B12 Levels and Peripheral Neuropathy Profile in Patients of Diabetes Mellitus on Metformin Therapy

Authors: Kamesh Gupta, Nitin Jain, Anurag Rohatgi

Abstract:

Objective: To study vitamin B12 levels and the presence of peripheral neuropathy among diabetes mellitus patients on metformin therapy. Method: The observational study was conducted from November 2014 to March 2015. Patients were selected from Lady Hardinge Medical College, Delhi, India. An exhaustive history regarding dietary habits and metformin usage was taken. Laboratory tests, including HbA1c levels and vitamin B12 assays, were done, on the basis of which patients were classified into subgroups. Peripheral neuropathy was detected by both clinical scoring and electrophysiological studies. Appropriate statistical analysis for observational studies was performed to evaluate the data. Results: The average duration of metformin usage was higher in patients with definite B12 deficiency (9.4 y) than in patients with normal B12 levels (5.6 y). Patients in the definite B12 deficiency group had a much higher incidence of neuropathy (89%) than patients with no deficiency (27%). The incidence of neuropathy was higher in cases with longer metformin usage (100% with 18-22 y of use and 83% with 14-17 y of use) than with shorter periods (29% with 2-5 y of use and 75% with 6-9 y of use). Conclusion: Patients on long-term metformin therapy are thus at high risk of vitamin B12 deficiency. The subgroups with definite and possible vitamin B12 deficiency on metformin had an earlier onset of neuropathy than the subgroup with normal vitamin B12 levels.

Keywords: diabetic neuropathy, cobalamin deficiency, metformin, nerve conduction studies

Procedia PDF Downloads 336
6701 Numerical Applications of Tikhonov Regularization for the Fourier Multiplier Operators

Authors: Fethi Soltani, Adel Almarashi, Idir Mechai

Abstract:

Tikhonov regularization and reproducing kernels are among the most popular approaches to solving ill-posed problems in computational mathematics and its applications. The Fourier multiplier operators are an essential tool for extending some known linear transforms of Euclidean Fourier analysis, such as the Weierstrass transform, the Poisson integral, the Hilbert transform, the Riesz transforms, the Bochner-Riesz mean operators, the partial Fourier integral, the Riesz potential, and the Bessel potential. Using the theory of reproducing kernels, we construct simple and efficient representations for a class of Fourier multiplier operators Tm on the Paley-Wiener space Hh. In addition, we give an error estimate formula for the approximation and obtain some convergence results as the parameters and the independent variables approach zero. Furthermore, using numerical quadrature rules to compute single and multiple integrals, we give numerical examples and write explicitly the extremal function and the corresponding Fourier multiplier operators.
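
Generically, and with notation that is only schematic relative to the paper, the Tikhonov-regularized problem underlying such constructions is:

```latex
% For data g and regularization parameter \lambda > 0, on the space H_h:
\[
  f_{\lambda} \;=\; \operatorname*{arg\,min}_{f \in H_{h}}
      \Bigl\{ \lambda\,\|f\|_{H_{h}}^{2} \;+\; \|T_{m}f - g\|^{2} \Bigr\},
\]
% whose minimizer admits an explicit representation through the reproducing
% kernel of H_h, which is what makes numerical quadrature feasible.
```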

Keywords: Fourier multiplier operators, Gauss-Kronrod method of integration, Paley-Wiener space, Tikhonov regularization

Procedia PDF Downloads 286
6700 The Role of Specificity in Mastering the English Article System

Authors: Sugene Kim

Abstract:

The English articles are taught as a binary system based on nominal countability and definiteness. Despite the detailed rules of prescriptive grammar, it has been consistently reported in the literature that their correct usage is extremely difficult to master, even for advanced learners of English as a second language (ESL) or a foreign language (EFL). Given that an English sentence (except for an imperative) cannot be constructed without a noun, which is always paired with the indefinite, definite, or zero article, it is essential to understand specifically what causes ESL/EFL learners to misuse them. To that end, this study examined EFL learners' article use employing a one-group pre-/post-test design. Forty-three Korean college students received instruction on correct English article usage in two 75-minute classes employing the binary schema set up for the study. They also practiced in class how to apply the rules as instructed. The participants were then given a forced-choice elicitation task, which had also been used as a pre-test administered three months prior to the instruction. Unlike the pre-test, on which they only chose the correct article for each of the 40 items, the post-instruction task additionally asked them to give written accounts of the decision-making procedure by which they chose each article. The participants' performance was scored manually by checking whether each answer was correct or incorrect, and their written comments were first categorized using thematic analysis and then ranked by frequency. The analyses of performance on the two tasks and the written think-aloud data suggested that EFL learners fluctuate between specificity and definiteness, overgeneralizing the use of the definite article for almost all cataphoric references. It was apparent that they have trouble distinguishing between the two concepts, possibly because the former is almost never introduced in the grammar books or classes designed for ESL/EFL learners. In particular, most participants were found to be unaware of the possibility of using nouns as [+specific, -definite]. Not surprisingly, the correct answer rates for such nouns averaged out at 33% and 46% on the pre- and post-tests, respectively, which barely reach half the overall mean correct answer rates of 65% on the pre-test and 81% on the post-test. In addition, correct article use for specific indefinites was the most impermeable to instruction when compared with nouns used as [-specific, -definite] or [±specific, +definite]. Such findings underline the necessity of expanding the binary schema into a ternary form that incorporates the specificity feature, albeit one that is not morphologically marked in the English language.

Keywords: countability, definiteness, English articles, specificity, ternary system

Procedia PDF Downloads 102
6699 Mean Square Responses of a Cantilever Beam with Various Damping Mechanisms

Authors: Yaping Zhao, Yimin Zhang

Abstract:

In the present paper, the stationary random vibration of a uniform cantilever beam is investigated. Two types of damping mechanism, i.e., external and internal viscous damping, are taken into account simultaneously. The excitation is support motion in the form of ideal white noise. Because the two types of damping mechanism are considered concurrently, the product of the modal damping ratio and the natural frequency is no longer a constant. As a result, the infinite definite integral encountered in the process of computing the mean square response is more complex than that in the existing literature. One significant contribution of this work is the accurate calculation of these definite integrals. The precise solution of the mean square response is thus finally obtained in infinite-series form. Numerical examples are supplied, and the outcomes confirm the validity of the theoretical analyses.

Keywords: random vibration, cantilever beam, mean square response, white noise

Procedia PDF Downloads 361
6698 A Survey on Positive Real and Strictly Positive Real Scalar Transfer Functions

Authors: Mojtaba Hakimi-Moghaddam

Abstract:

Positive real and strictly positive real transfer functions are important concepts in control theory. In this paper, the results of research in these areas are summarized. Definitions are given together with their graphical interpretations. The equivalent conditions in the frequency domain and in state-space representation are reviewed. Their equivalent electrical networks are explained. Also, a comprehensive discussion is presented of the difference in high-frequency behavior between the real parts of positive real and strictly positive real transfer functions. Furthermore, several illustrative examples are given.
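
For reference, the scalar characterizations most commonly used (the survey should be consulted for the precise treatment of poles on the imaginary axis) are:

```latex
\[
  G \text{ is PR} \iff
  G(s) \text{ is analytic in } \operatorname{Re}s>0
  \ \text{and}\ \operatorname{Re}G(s)\ge 0 \ \text{for all } \operatorname{Re}s>0 ,
\]
\[
  G \text{ is SPR} \iff G(s-\varepsilon) \text{ is PR for some } \varepsilon>0 ,
\]
% with the familiar frequency-domain test Re G(j\omega) >= 0 for all real \omega
% at which G is defined.
```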

Keywords: real rational transfer functions, positive realness property, strictly positive realness property, equivalent conditions

Procedia PDF Downloads 356
6697 Turing Pattern in the Oregonator Revisited

Authors: Elragig Aiman, Dreiwi Hanan, Townley Stuart, Elmabrook Idriss

Abstract:

In this paper, we reconsider the analysis of the Oregonator model. We highlight an error in this analysis which leads to an incorrect depiction of the parameter region in which diffusion-driven instability is possible. We believe that the cause of the oversight is the complexity of stability analyses based on eigenvalues and the dependence on parameters of the matrix minors appearing in the stability calculations. We regenerate the parameter space where Turing patterns can be seen, and we use the common Lyapunov function (CLF) approach, which is numerically reliable, to further confirm the dependence of the results on the intensities of the diffusion coefficients.

Keywords: diffusion driven instability, common Lyapunov function (CLF), Turing pattern, positive-definite matrix

Procedia PDF Downloads 330
6696 A Survey on Linear Time Invariant Multivariable Positive Real Systems

Authors: Mojtaba Hakimi-Moghaddam

Abstract:

Positive realness, the most important property of the driving-point impedance of passive electrical networks, appeared in control systems stability theory in the 1960s. Three important subsets of positive real (PR) systems have been introduced by researchers, namely lossless positive real (LLPR) systems, weakly strictly positive real (WSPR) systems, and strictly positive real (SPR) systems. In this paper, definitions, properties, lemmas, and theorems related to the family of positive real systems are summarized. Properties are explained in both the frequency domain and the state-space representation of the system. Also, several illustrative examples are presented.

Keywords: real rational matrix transfer functions, positive realness property, strictly positive realness property, Hermitian form asymptotic property, pole-zero properties

Procedia PDF Downloads 244
6695 Spin Coherent States Without Squeezing

Authors: A. Dehghani, S. Shirin

Abstract:

We propose in this article a new configuration of quantum states, |α, β> := |α>×|β>, which are composed of vector products of two different copies of spin coherent states, |α> and |β>. Some mathematical as well as physical properties of such states are discussed. For instance, it has been shown that the cross product of two coherent vectors remains coherent. They admit a resolution of the identity through positive definite measures on the complex plane. They represent packets similar to the true coherent states; in other words, we would not expect to see spin squeezing in any of the quadratures L̂x, L̂y, and L̂z. Depending on the particular choice of parameters in the above scenarios, they can be converted into the so-called Dicke states, which minimize the uncertainty relations of each pair of the angular momentum components.

Keywords: vector (cross) products, minimum uncertainty, angular momentum, measurement, Dicke states

Procedia PDF Downloads 379
6694 The Role of Leader, Member Exchange on Psychological Capital, Mediated by Person-Organisational Fit

Authors: Sonja Grobler

Abstract:

Background: Leadership, and specifically leader-member exchange (LMX), has a definite impact on employee behaviour and attitudes, and specifically on their state of psychological capital. The interactionist construct of person-organisational fit (P-O fit), consisting of a combination of supplementary fit (indirect fit or value congruence) and complementary fit (direct or person-job fit, as well as needs-supply fit), may, however, impact on the relationship between LMX and psychological capital. The unique permutations of these relationships are important not only for conceptualisation purposes but also for intervention design to enhance the employees' psychological capital; this would contribute to positive employee behaviour and attitudes. Aim: The purpose of this study was to determine whether a relationship exists between leader-member exchange (LMX) and psychological capital, with possible mediation by P-O fit. Setting: The research was conducted with approximately 60 employees from each of 43 private sector and four public sector organisations in South Africa. Method: This study utilised a positivist methodology based on an empirical approach while using a cross-sectional design and quantitative analysis. The sample is relatively representative (in terms of race, gender, and the South African work force), as it consisted of 60 employees from each of the 43 South African organisations that participated in the study, with 2 486 respondents in total. Results: Significant, positive relationships were found between LMX, P-O fit, and psychological capital. Additionally, it was found that P-O fit partially mediates the relationship between ethical leadership and supervisory trust, confirming the proposed model. Conclusion: A strong, positive relationship exists between LMX (consisting of Affect, Loyalty, Contribution, and Professional Respect) and psychological capital (consisting of Self-efficacy, Hope, Resilience, and Optimism), which is partially mediated by P-O fit (consisting of supplementary fit and complementary fit).

Keywords: leader and member exchange, person-organisational fit, psychological capital, positive psychology, interactionist approach

Procedia PDF Downloads 125