Search results for: locally linear embedding
3572 Transient Hygrothermoelastic Behavior in an Infinite Annular Cylinder with Internal Heat Generation by Linear Dependence Theory of Coupled Heat and Moisture
Authors: Tasneem Firdous Islam, G. D. Kedar
Abstract:
The aim of this paper is to study the effect of internal heat generation in an infinitely long annular cylinder subjected to transient hygrothermal loadings. The linear dependence theory of moisture and temperature is derived based on the Dufour and Soret effects. The solutions for temperature, moisture, and thermal stresses are obtained using the Hankel transform technique. The influence of the internal heat source in the radial direction is examined for the coupled and uncoupled cases. In the present study, the composite material T300/5208 is considered, and the coupled and uncoupled cases are analyzed. The results obtained are computed numerically and illustrated graphically.
Keywords: temperature, moisture, hygrothermoelasticity, internal heat generation, annular cylinder
Procedia PDF Downloads 115
3571 A Novel Image Steganography Method Based on Mandelbrot Fractal
Authors: Adnan H. M. Al-Helali, Hamza A. Ali
Abstract:
With the growth of censorship and pervasive monitoring on the Internet, steganography arises as a new means of achieving secret communication. Steganography is the art and science of embedding information within electronic media used by common applications and systems. Generally, hiding multimedia information within images changes some of their properties, which may introduce some degradation or unusual characteristics. This paper presents a new image steganography approach for hiding multimedia information (images, text, and audio) using a generated Mandelbrot fractal image as a cover. The proposed technique has been extensively tested with different images. The results show that the method is a very secure means of hiding and retrieving steganographic information. Experimental results demonstrate an effective improvement in the values of the Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), Normalized Cross Correlation (NCC), and Image Fidelity (IF) over previous techniques.
Keywords: fractal image, information hiding, Mandelbrot set fractal, steganography
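The PSNR and MSE quality metrics cited in this abstract are straightforward to compute; as an illustration (a generic numpy sketch for 8-bit images, not the authors' code):

```python
import numpy as np

def mse(cover, stego):
    """Mean squared error between two equally sized 8-bit images."""
    diff = cover.astype(np.float64) - stego.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(cover, stego, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means less visible distortion."""
    m = mse(cover, stego)
    if m == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / m)

# Hypothetical example: a flat cover image and a stego copy offset by 10 levels.
cover = np.zeros((64, 64), dtype=np.uint8)
stego = cover + 10

print(mse(cover, stego))                 # 100.0
print(round(psnr(cover, stego), 2))      # 28.13
```

In a stego-evaluation setting, `cover` would be the original Mandelbrot image and `stego` the image after embedding the payload.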
Procedia PDF Downloads 618
3570 Tensile Strength and Elastic Modulus of Nanocomposites Based on Polypropylene/Linear Low Density Polyethylene/Titanium Dioxide Nanoparticles
Authors: Faramarz Ashenai Ghasemi, Ismail Ghasemi, Sajad Daneshpayeh
Abstract:
In this study, the tensile strength and elastic modulus of nanocomposites based on polypropylene/linear low density polyethylene/nano titanium dioxide (PP/LLDPE/TiO2) were studied. The samples were produced using a co-rotating twin screw extruder, with 0, 2, and 4 wt.% of nanoparticles and 20, 40, and 60 wt.% of LLDPE. Styrene-ethylene-butylene-styrene (SEBS) was used as a compatibilizer. Tensile strength and elastic modulus were evaluated. The results showed that the modulus increased by 7% with the addition of nanoparticles in comparison to PP/LLDPE. In addition, tensile strength decreased.
Keywords: PP/LLDPE/TiO2, nanocomposites, elastic modulus, tensile strength
Procedia PDF Downloads 528
3569 Fuzzy-Sliding Controller Design for Induction Motor Control
Authors: M. Bouferhane, A. Boukhebza, L. Hatab
Abstract:
In this paper, a fuzzy sliding mode controller for position control of a linear induction motor (LIM) is proposed. First, indirect field-oriented control of the LIM is derived. Then, a sliding mode control system with an integral-operation switching surface is investigated, in which a simple adaptive algorithm is utilized for the generalized soft-switching parameter. Finally, a fuzzy sliding mode controller is derived to compensate for the uncertainties which occur in the control, in which the fuzzy logic system is used to dynamically adjust the parameter settings of the SMC control law. The effectiveness of the proposed control scheme is verified by numerical simulation. The experimental results of the proposed scheme present good performance compared to the conventional sliding mode controller.
Keywords: linear induction motor, vector control, backstepping, fuzzy-sliding mode control
Procedia PDF Downloads 489
3568 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes
Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi
Abstract:
The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element’s domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells, which are clipped to accommodate the domain geometry, must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes at comparable accuracy levels are obtained when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking an analytical limit of this element’s stress field as it approaches the crack tip, delivers an expression for the singular stress field. By applying the problem specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing. 
Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even given very coarse discretization, since they only rely on the ratio of mode one to mode two gSIFs. The absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme, which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which elevates the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.
Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled boundary finite element method, hybrid quadtrees
Procedia PDF Downloads 146
3567 Estimation of Desktop E-Wastes in Delhi Using Multivariate Flow Analysis
Authors: Sumay Bhojwani, Ashutosh Chandra, Mamita Devaburman, Akriti Bhogal
Abstract:
This article uses material flow analysis for estimating e-wastes in the Delhi/NCR region. The material flow analysis is based on sales data obtained from various sources. Much of the data available for the sales is unreliable because of the existence of a huge informal sector. The informal sector in India accounts for more than 90%. Therefore, the scope of this study is limited to the formal sector only. Also, for projection of the sales data until 2030, we have used linear regression to avoid complexity. The actual sales in the years following 2015 may vary non-linearly, but we have assumed a basic linear relation. The purpose of this study was to obtain an approximate quantity of desktop e-wastes that we will have by the year 2030, so that we can start preparing ourselves for the inevitable investment in the treatment of these ever-rising e-wastes. The results of this study can be used to install a treatment plant for e-wastes in Delhi.
Keywords: e-wastes, Delhi, desktops, estimation
Procedia PDF Downloads 258
3566 Effect of Mica Content in Sand on Site Response Analyses
Authors: Volkan Isbuga, Joman M. Mahmood, Ali Firat Cabalar
Abstract:
This study presents the site response analysis of mica-sand mixtures available in certain parts of the world, including Izmir, a highly populated city located in a seismically active region in the western part of Turkey. We performed site response analyses by employing SHAKE, an equivalent linear approach, for micaceous soil deposits consisting of layers with different mica contents and thicknesses. The dynamic behavior of micaceous sands, such as shear modulus reduction and damping ratio curves, is input for the ground response analyses. Micaceous sands exhibit a unique dynamic response under a scenario earthquake with a magnitude of Mw=6. Results showed that a higher amount of mica caused higher spectral accelerations.
Keywords: micaceous sands, site response, equivalent linear approach, SHAKE
Procedia PDF Downloads 340
3565 Dry Relaxation Shrinkage Prediction of Bordeaux Fiber Using a Feed Forward Neural Network
Authors: Baeza S. Roberto
Abstract:
Knitted fabric suffers dimensional deformation due to transverse stretching and longitudinal tension during processing on rectilinear knitting machines, so a dry relaxation shrinkage procedure and a thermal prefixing action are performed to obtain stable conditions in the knitting. This paper presents a dry relaxation shrinkage prediction for Bordeaux fiber using a feed-forward neural network and linear regression models. Six operational alternatives of shrinkage were predicted. A comparison of the results was performed, finding that the neural network models gave higher levels of explanation of the variability and better prediction. The effects of different repose periods are included. The models were obtained through the neural network toolbox of MATLAB and Minitab software with real data from a knitting company in southern Guanajuato. The results allow predicting the dry relaxation shrinkage of each alternative operation.
Keywords: neural network, dry relaxation, knitting, linear regression
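For readers unfamiliar with feed-forward networks, the prediction step amounts to propagating inputs through weighted layers and activations; a minimal numpy forward pass with made-up weights (the paper's actual models were fit in MATLAB/Minitab):

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer feed-forward pass with a sigmoid activation,
    a common choice for regression-style shrinkage prediction."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))  # hidden layer
    return W2 @ h + b2                         # linear output layer

# Tiny illustrative network: 2 inputs (e.g. tension, stretch), 3 hidden units, 1 output.
W1 = np.array([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]])
b1 = np.array([0.0, 0.1, -0.1])
W2 = np.array([[1.0, -1.0, 0.5]])
b2 = np.array([0.2])

y = forward(np.array([1.0, 2.0]), W1, b1, W2, b2)
print(y)  # ≈ 0.378 for these weights
```

Training (not shown) would adjust `W1, b1, W2, b2` to minimize the prediction error against measured shrinkage values.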
Procedia PDF Downloads 584
3564 Influence of Internal Heat Source on Thermal Instability in a Horizontal Porous Layer with Mass Flow and Inclined Temperature Gradient
Authors: Anjanna Matta, P. A. L. Narayana
Abstract:
An investigation is presented to analyze the effect of an internal heat source on the onset of Hadley-Prats flow in a horizontal fluid-saturated porous medium. We seek a better understanding of the combined influence of the heat source and mass flow effect by using linear stability analysis. The resultant eigenvalue problem is solved by using shooting and Runge-Kutta methods to evaluate the critical thermal Rayleigh number with respect to various flow-governing parameters. It is identified that the flow switches from stabilizing to destabilizing as the horizontal thermal Rayleigh number is enhanced. Increases in the heat source and mass flow result in a stronger destabilizing effect.
Keywords: linear stability analysis, heat source, porous medium, mass flow
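The shooting/Runge-Kutta procedure can be illustrated on a toy eigenvalue problem, y'' + λy = 0 with y(0) = y(1) = 0, whose smallest eigenvalue is π² ≈ 9.87; a self-contained sketch (generic, not the paper's stability equations):

```python
import math

def shoot(lam, n=200):
    """RK4-integrate y'' = -lam*y on [0, 1] with y(0)=0, y'(0)=1;
    return y(1). A sign change of y(1) in lam brackets an eigenvalue."""
    h = 1.0 / n
    y, v = 0.0, 1.0
    for _ in range(n):
        k1y, k1v = v, -lam * y
        k2y, k2v = v + 0.5 * h * k1v, -lam * (y + 0.5 * h * k1y)
        k3y, k3v = v + 0.5 * h * k2v, -lam * (y + 0.5 * h * k2y)
        k4y, k4v = v + h * k3v, -lam * (y + h * k3y)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

# Bisection on lam: y(1; lam) changes sign across the first eigenvalue.
lo, hi = 1.0, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid

lam_crit = 0.5 * (lo + hi)
print(lam_crit)  # ≈ 9.8696, i.e. pi**2
```

The paper's critical Rayleigh number plays the same role as λ here: it is the parameter value at which the shooting residual of the stability equations crosses zero.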
Procedia PDF Downloads 721
3563 Linear Fractional Differential Equations for Second Kind Modified Bessel Functions
Authors: Jorge Olivares, Fernando Maass, Pablo Martin
Abstract:
Fractional derivatives have been considered recently as a way to solve different problems in engineering. In this way, second kind modified Bessel functions are considered here. The order-α fractional differential equations of second kind Bessel functions, Kᵥ(x), are studied with simple initial conditions. The Laplace transform and the Caputo definition of fractional derivatives are considered. Solutions have been found for ν = 1/3, 1/2, 2/3, -1/3, -1/2 and -2/3. In these cases, the solutions are the sum of two hypergeometric functions. The α fractional derivatives have been found for α = 1/3, 1/2 and 2/3, and the above values of ν. No convergence has been found for integer values of ν. Furthermore, when α is considered as a general rational fraction m/p, no general solution has been found. Clearly, this case is more difficult to treat than that of the first kind Bessel functions.
Keywords: Caputo, modified Bessel functions, hypergeometric, linear fractional differential equations, Laplace transform
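The functions Kᵥ(x) studied here admit the integral representation Kᵥ(x) = ∫₀^∞ e^(−x cosh t) cosh(νt) dt, which can be checked numerically against the closed form K₁/₂(x) = √(π/(2x)) e^(−x); an illustrative numpy sketch (the paper itself works analytically via Laplace transforms):

```python
import numpy as np

def besselk(nu, x, t_max=25.0, n=200001):
    """K_nu(x) from the integral representation
    K_nu(x) = int_0^inf exp(-x*cosh(t)) * cosh(nu*t) dt,
    evaluated with a plain trapezoidal rule; the integrand decays
    double-exponentially, so truncation at t_max is safe."""
    t = np.linspace(0.0, t_max, n)
    g = np.exp(-x * np.cosh(t)) * np.cosh(nu * t)
    h = t[1] - t[0]
    return float(h * (g.sum() - 0.5 * (g[0] + g[-1])))

x = 1.0
numeric = besselk(0.5, x)
closed_form = np.sqrt(np.pi / (2 * x)) * np.exp(-x)  # exact K_{1/2}(x)
print(numeric, closed_form)  # both ≈ 0.46107
```

The half-integer orders ν = ±1/2 are the ones with elementary closed forms, which is one reason they appear among the tractable cases above.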
Procedia PDF Downloads 342
3562 Evaluation of Natural Waste Materials for Ammonia Removal in Biofilters
Authors: R. F. Vieira, D. Lopes, I. Baptista, S. A. Figueiredo, V. F. Domingues, R. Jorge, C. Delerue-matos, O. M. Freitas
Abstract:
Odours are generated in municipal solid waste management plants as a result of the decomposition of organic matter, especially when anaerobic degradation occurs. Information was collected about the substances and their respective concentrations in the surrounding atmosphere of some management plants. The main components which are associated with these unpleasant odours were identified: ammonia, hydrogen sulfide and mercaptans. The first is the most common and the one that presents the highest concentrations, reaching values of 700 mg/m3. Biofiltration, which involves simultaneous biodegradation, absorption and adsorption processes, is a sustainable technology for the treatment of these odour emissions when a natural packing material is used. The packing material should ideally be cheap, durable, and allow maximum microbiological activity and adsorption/absorption. The presence of nutrients and water is required for biodegradation processes. Adsorption and absorption are enhanced by high specific surface area, high porosity and low density. The main purpose of this work is the exploitation of natural waste materials, locally available, as packing media: heather (Erica lusitanica), chestnut bur (from Castanea sativa), peach pits (from Prunus persica) and eucalyptus bark (from Eucalyptus globulus). Preliminary batch tests of ammonia removal were performed in order to select the most interesting materials for biofiltration, which were then characterized. The following physical and chemical parameters were evaluated: density, moisture, pH, buffer and water retention capacity. The determination of equilibrium isotherms and the adjustment to the Langmuir and Freundlich models was also performed. Both models can fit the experimental results. Based both on the material's performance as an adsorbent and on its physical and chemical characteristics, eucalyptus bark was considered the best material. It presents a maximum adsorption capacity of 0.78±0.45 mol/kg for ammonia.
The results from its characterization are: density of 121 kg/m3, 9.8% moisture, pH equal to 5.7, buffer capacity of 0.370 mmol H+/kg of dry matter, and water retention capacity of 1.4 g H2O/g of dry matter. The application of natural materials that are locally available, with little processing, in biofiltration is an economical and sustainable alternative that should be explored.
Keywords: ammonia removal, biofiltration, natural materials, odour control
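The Langmuir fit reported above (maximum capacity 0.78 mol/kg) can be sketched via the classic linearization Ce/qe = Ce/qmax + 1/(qmax·KL); the data below are synthetic, not the paper's measurements:

```python
import numpy as np

# Langmuir model: qe = qmax * KL * Ce / (1 + KL * Ce)
qmax_true, KL_true = 0.78, 2.0                       # hypothetical "true" parameters
Ce = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])  # equilibrium concentrations
qe = qmax_true * KL_true * Ce / (1 + KL_true * Ce)   # noiseless synthetic uptakes

# Linearized form: Ce/qe = (1/qmax) * Ce + 1/(qmax*KL) -> fit a straight line.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_fit = 1.0 / slope
KL_fit = 1.0 / (qmax_fit * intercept)

print(qmax_fit, KL_fit)  # recovers 0.78 and 2.0 on noiseless data
```

With real isotherm data, a non-linear least-squares fit of the original (non-linearized) model is usually preferred, since the linearization distorts the error structure.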
Procedia PDF Downloads 368
3561 Estimation of Harmonics in Three-Phase and Six-Phase (Multi-Phase) Load Circuits
Authors: Zakir Husain, Deepak Kumar
Abstract:
Harmonics are very harmful in an electrical system and can have serious consequences such as reducing the life of apparatus, stress on cables and equipment, etc. This paper presents an extensive analytical study of the harmonic characteristics of multiphase (six-phase) and three-phase systems equipped with two- and three-level inverters for non-linear loads. Multilevel inverters offer elevated voltage capability with voltage-limited devices, low harmonic distortion, and reduced switching losses. Multiphase technology also plays a promising role in harmonic reduction. Matlab simulation is carried out to compare the advantage of multi-phase over three-phase systems equipped with two- or three-level inverters for non-linear load harmonic reduction. Extensive simulation results are presented based on case studies.
Keywords: fast Fourier transform (FFT), harmonics, inverter, ripples, total harmonic distortion (THD)
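The THD figure of merit referenced in the keywords is computed from the FFT of the waveform; a minimal numpy sketch with a synthetic 50 Hz signal carrying known 3rd and 5th harmonics (illustrative, not the paper's simulation):

```python
import numpy as np

fs, f0, N = 1000.0, 50.0, 1000            # sample rate (Hz), fundamental (Hz), samples
t = np.arange(N) / fs                      # exactly 50 cycles -> no spectral leakage

# Synthetic distorted waveform: fundamental 1.0, 3rd harmonic 0.10, 5th harmonic 0.05.
x = (np.sin(2 * np.pi * f0 * t)
     + 0.10 * np.sin(2 * np.pi * 3 * f0 * t)
     + 0.05 * np.sin(2 * np.pi * 5 * f0 * t))

spec = np.abs(np.fft.rfft(x)) * 2.0 / N    # single-sided amplitude spectrum
fund = spec[int(f0 * N / fs)]              # amplitude at the fundamental bin
harm = [spec[int(k * f0 * N / fs)] for k in (3, 5)]

thd = np.sqrt(sum(a ** 2 for a in harm)) / fund
print(round(100 * thd, 2))                 # 11.18 (percent)
```

Choosing the window to span an integer number of fundamental cycles keeps each harmonic in a single FFT bin, so the amplitudes can be read off directly.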
Procedia PDF Downloads 552
3560 Automated Localization of Palpebral Conjunctiva and Hemoglobin Determination Using Smart Phone Camera
Authors: Faraz Tahir, M. Usman Akram, Albab Ahmad Khan, Mujahid Abbass, Ahmad Tariq, Nuzhat Qaiser
Abstract:
The objective of this study was to evaluate the degree of anemia by taking a picture of the palpebral conjunctiva using a smartphone camera. We first localize the region of interest in the image, extract features from it, and train an SVM classifier on those features; as a result, our system classifies the image in real time by hemoglobin level. The proposed system has given an accuracy of 70%. We have trained our classifier on a locally gathered dataset of 30 patients.
Keywords: anemia, palpebral conjunctiva, SVM, smartphone
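As an illustration of the SVM step, here is a from-scratch linear SVM trained by hinge-loss sub-gradient descent (Pegasos-style) on made-up 2-D features, not the authors' trained model:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=500, seed=0):
    """Pegasos-style sub-gradient descent on the regularized hinge loss.
    y must be in {-1, +1}; returns weight vector w (bias folded in)."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)               # decaying step size
            margin = y[i] * (w @ Xb[i])
            w *= (1 - eta * lam)                # shrink (regularization step)
            if margin < 1:                      # hinge-loss sub-gradient step
                w += eta * y[i] * Xb[i]
    return w

# Hypothetical linearly separable feature clusters (e.g. pale vs normal conjunctiva).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.3],
              [1.0, 1.1], [1.2, 0.9], [0.9, 1.2]])
y = np.array([-1, -1, -1, 1, 1, 1])

w = train_linear_svm(X, y)
pred = np.sign(np.hstack([X, np.ones((6, 1))]) @ w)
print((pred == y).mean())  # accuracy on the toy set
```

A practical pipeline would more likely use an off-the-shelf SVM implementation with a kernel; the sketch only shows the underlying optimization idea.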
Procedia PDF Downloads 505
3559 Predicting Daily Patient Hospital Visits Using Machine Learning
Authors: Shreya Goyal
Abstract:
The study aims to build user-friendly software to understand patient arrival patterns and compute the number of potential patients who will visit a particular health facility for a given period by using a machine learning algorithm. The underlying machine learning algorithm used in this study is the Support Vector Machine (SVM). Accurate prediction of patient arrivals allows hospitals to operate more effectively, providing timely and efficient care while optimizing resources and improving the patient experience. It allows for better allocation of staff, equipment, and other resources. If there is a projected surge in patients, additional staff or resources can be allocated to handle the influx, preventing bottlenecks or delays in care. Understanding patient arrival patterns can also help streamline processes to minimize waiting times and ensure timely access to care for patients in need. Another big advantage of using this software is adherence to strict data protection regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States, as the hospital will not have to share the data with any third party or upload it to the cloud, because the software can read data locally from the machine. The data needs to be arranged in a particular format, and the software will be able to read the data and provide meaningful output. Using software that operates locally can facilitate compliance with these regulations by minimizing data exposure. Keeping patient data within the hospital's local systems reduces the risk of unauthorized access or breaches associated with transmitting data over networks or storing it in external servers. This can help maintain the confidentiality and integrity of sensitive patient information. Historical patient data is used in this study. The input variables used to train the model include patient age, time of day, day of the week, seasonal variations, and local events.
The algorithm uses a supervised learning method to optimize the objective function and find the global minimum. The algorithm stores the values of the local minima after each iteration and at the end compares all the local minima to find the global minimum. The strength of this study is the transfer function used to calculate the number of patients. The model has an output accuracy of >95%. The method proposed in this study could be used for better management planning of personnel and medical resources.
Keywords: machine learning, SVM, HIPAA, data
Procedia PDF Downloads 65
3558 Stability of Hybrid Stochastic Systems
Authors: Manlika Ratchagit
Abstract:
This paper is concerned with robust mean square stability of uncertain stochastic switched discrete time-delay systems. The system to be considered is subject to interval time-varying delays, which allows the delay to be a fast time-varying function and the lower bound is not restricted to zero. Based on the discrete Lyapunov functional, a switching rule for the robust mean square stability for the uncertain stochastic discrete time-delay system is designed via linear matrix inequalities. Finally, some examples are exploited to illustrate the effectiveness of the proposed schemes.
Keywords: robust mean square stability, discrete-time stochastic systems, hybrid systems, interval time-varying delays, Lyapunov functional, linear matrix inequalities
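The Lyapunov/LMI machinery used above generalizes the basic discrete-time test: x_{k+1} = A x_k is stable iff AᵀPA − P = −Q has a positive definite solution P for some Q ≻ 0. A numpy sketch of this delay-free, deterministic special case (not the paper's stochastic switched setting), solving for P via the convergent series P = Σₖ (Aᵀ)ᵏ Q Aᵏ:

```python
import numpy as np

def discrete_lyapunov(A, Q, iters=500):
    """Solve A.T @ P @ A - P = -Q by summing the series
    P = sum_k (A.T)^k Q A^k, which converges when rho(A) < 1."""
    P = Q.copy()
    term = Q.copy()
    for _ in range(iters):
        term = A.T @ term @ A
        P += term
    return P

A = np.array([[0.5, 0.1],
              [0.0, 0.3]])   # Schur-stable: eigenvalues 0.5 and 0.3
Q = np.eye(2)

P = discrete_lyapunov(A, Q)
residual = A.T @ P @ A - P + Q
print(np.abs(residual).max())       # ~0: Lyapunov equation satisfied
print(np.linalg.eigvalsh(P) > 0)    # P positive definite -> stability certified
```

The paper's LMI conditions play the same role as finding such a P, but additionally account for switching, stochastic uncertainty, and interval time-varying delays.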
Procedia PDF Downloads 485
3557 Non-Linear Dynamic Analyses of Grouted Pile-Sleeve Connection
Authors: Mogens Saberi
Abstract:
The focus of this article is to present the experience gained from the design of a grouted pile-sleeve connection and to provide simple design expressions which can be used in the preliminary design phase of such connections. The grouted pile-sleeve connection serves as a connection between an offshore jacket foundation and pre-installed piles located in the seabed. The jacket foundation supports a wind turbine generator, resulting in significant dynamic loads on the connection. The connection is designed with shear keys in order to optimize the overall design, but little experience is currently available in the use of shear keys in such connections. It is found that the consequence of introducing shear keys in the design is a very complex stress distribution which requires special attention due to significant fatigue loads. An optimal geometrical shape of the shear keys is introduced in order to avoid large stress concentration factors and to allow relatively easy fabrication. The connection is analysed in ANSYS Mechanical, where the grout is modelled by a non-linear material model which allows for cracking of the grout material and captures its elastic-plastic behaviour. Special types of finite elements are used in the interface between the pile sleeve and the grout material to model the slip surface between the grout material and the steel. Based on the performed finite element modelling, simple design expressions are introduced.
Keywords: fatigue design, non-linear finite element modelling, structural dynamics, simple design expressions
Procedia PDF Downloads 384
3556 New Results on Stability of Hybrid Stochastic Systems
Authors: Manlika Rajchakit
Abstract:
This paper is concerned with robust mean square stability of uncertain stochastic switched discrete time-delay systems. The system to be considered is subject to interval time-varying delays, which allows the delay to be a fast time-varying function and the lower bound is not restricted to zero. Based on the discrete Lyapunov functional, a switching rule for the robust mean square stability for the uncertain stochastic discrete time-delay system is designed via linear matrix inequalities. Finally, some examples are exploited to illustrate the effectiveness of the proposed schemes.
Keywords: robust mean square stability, discrete-time stochastic systems, hybrid systems, interval time-varying delays, Lyapunov functional, linear matrix inequalities
Procedia PDF Downloads 429
3555 Loss of the Skin Barrier after Dermal Application of the Low Molecular Methyl Siloxanes: Volatile Methyl Siloxanes, VMS Silicones
Authors: D. Glamowska, K. Szymkowska, K. Mojsiewicz- Pieńkowska, K. Cal, Z. Jankowski
Abstract:
Introduction: The integrity of the outermost layer of skin (stratum corneum) is vital to the penetration of various compounds, including toxic substances. The barrier function of skin depends on its structure. The barrier function of the stratum corneum is provided by patterned lipid lamellae (bilayers). However, a lot of substances, including the low molecular methyl siloxanes (volatile methyl siloxanes, VMS), have an impact on the skin barrier by damaging the structure of the stratum corneum. VMS belong to silicones. They are widely used in the pharmaceutical as well as cosmetic industry. Silicones fulfill the role of ingredient or excipient in medicinal products and of excipient in personal care products. Due to the significant human exposure to this group of compounds, an important aspect is the toxicology of the compounds and the safety assessment of products. Silicones are generally considered non-toxic substances, but there are some data about their negative effects on living organisms after inhaled or oral exposure. However, the transdermal route has not been described in the literature as a possible alternative route of penetration. The aim of the study was to verify the possibility of penetration of the stratum corneum by VMS, with further permeation into the deeper layers of the skin (epidermis and dermis) as well as into the acceptor fluid. Methods: The research methodology was developed based on the OECD and WHO guidelines. In the ex vivo study, fluorescence microscopy and ATR FT-IR spectroscopy were used. Franz-type diffusion cells were used for application of the VMS to samples of human skin (A=0.65 cm) for 24 h. The stratum corneum at the application site was tape-stripped. After separation of the epidermis, the relevant dyes (fluorescein, sulforhodamine B, rhodamine B hexyl ester) were applied and observations were carried out under the microscope.
To confirm the penetration and permeation of the cyclic or linear VMS, and thus the presence of silicone in the individual layers of the skin, ATR FT-IR spectra of the samples after application of silicone and of H2O (control sample) were recorded. The research included comparison of the intensity of bands at positions characteristic for silicones (1263 cm-1, 1052 cm-1 and 800 cm-1). Results and Conclusions: The results show that cyclic and linear VMS are able to overcome the barrier of the skin. Their damaging influence on the corneocytes of the stratum corneum was observed. This phenomenon was due to distinct disturbances in the lipid structure of the stratum corneum. The presence of cyclic and linear VMS was identified in the stratum corneum, epidermis as well as in the dermis by both fluorescence microscopy and ATR FT-IR spectroscopy. This confirms that the cyclic and linear VMS can penetrate the stratum corneum and permeate through the human skin layers. Apart from this, they cause changes in the structure of the skin. The results point to possible absorption of the VMS with linear and cyclic structure into the blood and lymphatic vessels.
Keywords: low molecular methyl siloxanes, volatile methyl siloxanes, linear and cyclic siloxanes, skin penetration, skin permeation
Procedia PDF Downloads 344
3554 “Multi-Sonic Timbre” of the Biula: The Integral Role of Tropical Tonewood in Bajau Sama Dilaut Bowed Lute Acoustics
Authors: Wong Siew Ngan, Lee Chie Tsang, Lee See Ling, Lim Ho Yi
Abstract:
The selection of tonewood is critical in defining the tonal and acoustic qualities of string instruments, yet limited research exists on indigenous instruments utilizing tropical woods. This gap is addressed by analyzing the "multi-sonic timbre" of the Biula (Bajau Sama Dilaut), crafted by rainforest indigenous communities using locally accessible tropical species such as jackfruit and coconut, whose distinctive grain patterns, density, and moisture content significantly contribute to the instrument's rich harmonic spectrum and dynamic range. Unlike Western violins that utilize temperate woods like maple and spruce, the Biula's sound is shaped by the unique acoustic properties of these tropical tonewoods. To further investigate the impact of tropical tonewoods on the Biula's acoustics, frequency response tests were conducted on instruments constructed from various local species; using SPEAR (Sinusoidal Partial Editing Analysis and Resynthesis) software for spectral analysis, measurements were taken of resonance frequencies, harmonic content, and sound decay rates. These analyses reveal that jackfruit wood produces warmer tones with enhanced lower frequencies, while coconut wood contributes to brighter timbres with pronounced higher harmonics. Building upon these findings, the materials and construction methods of Biula bows were also examined. The study found that variations in tropical hardwoods and locally sourced bow hair significantly influence the instrument's responsiveness and articulation, shaping its distinctive 'multi-sonic timbre.' These findings deepen the understanding of indigenous instrument acoustics, offering valuable insights for modern luthiers interested in tropical tonewoods.
By documenting traditional crafting techniques, this research supports the preservation of cultural heritage and promotes appreciation of indigenous craftsmanship.
Keywords: multi-sonic timbre, biula (bajau sama dilaut bowed lute), tropical tonewoods, spectral analysis, indigenous instrument acoustics
Procedia PDF Downloads 8
3553 Generalized Correlation Coefficient in Genome-Wide Association Analysis of Cognitive Ability in Twins
Authors: Afsaneh Mohammadnejad, Marianne Nygaard, Jan Baumbach, Shuxia Li, Weilong Li, Jesper Lund, Jacob v. B. Hjelmborg, Lene Christensen, Qihua Tan
Abstract:
Cognitive impairment in the elderly is a key issue affecting the quality of life. Despite a strong genetic background in cognition, only a limited number of single nucleotide polymorphisms (SNPs) have been found. These explain a small proportion of the genetic component of cognitive function, thus leaving a large proportion unaccounted for. We hypothesize that one reason for this missing heritability is the misspecified modeling in data analysis concerning phenotype distribution as well as the relationship between SNP dosage and the phenotype of interest. In an attempt to overcome these issues, we introduced a model-free method based on the generalized correlation coefficient (GCC) in a genome-wide association study (GWAS) of cognitive function in twin samples and compared its performance with two popular linear regression models. The GCC-based GWAS identified two genome-wide significant (P-value < 5e-8) SNPs; rs2904650 near ZDHHC2 on chromosome 8 and rs111256489 near CD6 on chromosome 11. The kinship model also detected two genome-wide significant SNPs, rs112169253 on chromosome 4 and rs17417920 on chromosome 7, whereas no genome-wide significant SNPs were found by the linear mixed model (LME). Compared to the linear models, more meaningful biological pathways like GABA receptor activation, ion channel transport, neuroactive ligand-receptor interaction, and the renin-angiotensin system were found to be enriched by SNPs from GCC. The GCC model outperformed the linear regression models by identifying more genome-wide significant genetic variants and more meaningful biological pathways related to cognitive function. Moreover, GCC-based GWAS was robust in handling genetically related twin samples, which is an important feature in handling genetic confounding in association studies.
Keywords: cognition, generalized correlation coefficient, GWAS, twins
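As a flavor of model-free, rank-based association testing, here is a plain Spearman correlation in numpy (a much simpler relative of the paper's GCC, shown only to illustrate why rank statistics are robust to non-linear SNP-phenotype relationships; the data are synthetic):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Model-free in the sense that it only assumes a monotone association
    (this double-argsort ranking assumes no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Synthetic predictor and a strongly non-linear but monotone phenotype:
x = np.linspace(0.0, 3.0, 12)
y = np.exp(x)

print(spearman(x, y))  # 1.0: the monotone link is captured despite non-linearity
```

A Pearson correlation on the raw values would understate this relationship; rank-based statistics like this (and, more generally, the GCC) avoid assuming a linear dosage-phenotype link.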
Procedia PDF Downloads 124
3552 Analysis of Constraints and Opportunities in Dairy Production in Botswana
Authors: Som Pal Baliyan
Abstract:
The dairy enterprise has been a major source of employment and income generation in most economies worldwide. The Botswana government has also identified dairy as one of the agricultural sectors for diversification of the mineral-dependent economy of the country. The huge gap between local demand and supply of milk and milk products indicates that not only constraints but also opportunities exist in this sub-sector of agriculture. Therefore, this study was an attempt to identify constraints and opportunities in the dairy production industry in Botswana. Possible ways to mitigate the constraints were also identified. The findings should assist the stakeholders, especially policy makers, in the formulation of effective policies for the growth of the dairy sector in the country. This quantitative study adopted a survey research design. A pilot survey followed by a final survey was conducted for data collection. The purpose of the pilot survey was to collect basic information on the nature and extent of the constraints, opportunities and ways to mitigate the constraints in dairy production. Based on the information from the pilot survey, a four-point Likert-type scale questionnaire was constructed, validated and tested for its reliability. The data for the final survey were collected from twenty-five purposively selected dairy farms. Descriptive statistical tools were employed to analyze the data. Among the twelve constraints identified, high feed costs, feed shortage and availability, lack of technical support, lack of skilled manpower, high prevalence of pests and diseases and lack of dairy-related technologies were the six major constraints in dairy production. Grain feed production, roughage feed production, manufacturing of dairy feed, establishment of a milk processing industry and development of transportation systems were the five major opportunities among the eight opportunities identified.
Increasing local production of animal feed, increasing local roughage feed production, provision of subsidies on animal feed, easy access to sufficient financial support, training of farmers, and effective control of pests and diseases were identified as the six major ways to mitigate the constraints. It was recommended that the identified constraints, opportunities, and mitigation measures be carefully considered by stakeholders, especially policy makers, during the formulation and implementation of policies for the development of the dairy sector in Botswana.
Keywords: dairy enterprise, milk production, opportunities, production constraints
Procedia PDF Downloads 404
3551 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation
Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro
Abstract:
This study evaluated the implications of block size and testing order for the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments; precision was defined as the inverse of the variance of the treatment mean (or effect) estimates. The experiment was designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to its partially balanced subgroups, namely: (a) the experiment with the four initial experimental units (EU); (b) the experiment with EU 5 to 8; (c) the experiment with EU 9 to 12; and (d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model was assumed with random tester and treatment effects and a fixed testing-order effect. Analysis with a cumulative random-effects probit link model was very similar, leading to essentially the same conclusions, so for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and the cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating acceptance. However, providing a large number of samples can help to improve sample discrimination.
Keywords: acceptance, block size, mixed linear model, testing order
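The two criteria above are simple functions of the variances of the treatment-mean estimates. A minimal NumPy sketch, using synthetic grades and illustrative sizes rather than the paper's Sudoku design:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic hedonic grades: 16 treatments, 28 graders each (illustrative
# sizes only; the actual study used 112 testers in Sudoku blocks).
n_treat = 16
scores = rng.normal(6.0, 1.5, size=(n_treat, 28))

# Variance of each treatment-mean estimate
var_mean = scores.var(axis=1, ddof=1) / scores.shape[1]

# Precision: inverse of the (average) variance of the treatment means
precision = 1.0 / var_mean.mean()

# Efficiency: inverse of the average variance of pairwise comparisons;
# for independent means, Var(m_i - m_j) = Var(m_i) + Var(m_j)
pair_vars = [var_mean[i] + var_mean[j]
             for i in range(n_treat) for j in range(i + 1, n_treat)]
efficiency = 1.0 / np.mean(pair_vars)
```

Under this balanced setup the average pairwise variance is exactly twice the average mean variance, so efficiency equals half the precision; smaller blocks or unbalanced testing orders change the variances and hence both criteria.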
Procedia PDF Downloads 321
3550 Adaptive Few-Shot Deep Metric Learning
Authors: Wentian Shi, Daming Shi, Maysam Orouskhani, Feng Tian
Abstract:
Whereas the currently most prevalent deep learning methods require a large amount of training data, few-shot learning tries to learn a model from limited data without extensive retraining. In this paper, we present a loss function based on the triplet loss for solving the few-shot problem using metric-based learning. Instead of setting the margin distance in the triplet loss to a constant chosen empirically, we propose an adaptive margin-distance strategy that obtains the appropriate margin automatically. We implement the strategy in a deep Siamese network for deep metric embedding, utilizing an optimization approach that penalizes the worst case and rewards the best. Our experiments on an image recognition and co-segmentation model demonstrate that using the proposed triplet loss with an adaptive margin distance can significantly improve performance.
Keywords: few-shot learning, triplet network, adaptive margin, deep learning
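The core ingredient is the triplet loss with a margin that follows the data rather than being fixed. A minimal NumPy sketch; the adaptive rule below is an illustrative stand-in, not the paper's exact strategy:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin):
    # Standard triplet loss: hinge on d(a, p) - d(a, n) + margin
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def adaptive_margin(anchors, positives, negatives, scale=0.5):
    # Hypothetical adaptive rule: set the margin to a fraction of the
    # current mean gap between negative and positive distances, so the
    # margin tracks the embedding as training proceeds.
    d_pos = np.linalg.norm(anchors - positives, axis=1)
    d_neg = np.linalg.norm(anchors - negatives, axis=1)
    return scale * max(float(np.mean(d_neg - d_pos)), 0.0)
```

With a fixed margin the hinge goes slack once negatives are pushed far enough; an adaptive margin keeps tightening the constraint as the embedding improves.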
Procedia PDF Downloads 169
3549 Investigation a New Approach "AGM" to Solve of Complicate Nonlinear Partial Differential Equations at All Engineering Field and Basic Science
Authors: Mohammadreza Akbari, Pooya Soleimani Besheli, Reza Khalili, Davood Domiri Danji
Abstract:
In this conference paper, our aim is to demonstrate accuracy, capability, and power in solving complicated nonlinear partial differential equations, and to enhance the ability to solve such equations in basic science and engineering with a simple and innovative approach. Most engineering systems in practice behave nonlinearly, and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; some problems, such as fluid and gas waves without boundary conditions, cannot be solved by numerical methods at all. Accordingly, we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs, PDEs) with high accuracy and a simple solution; this is demonstrated by comparing the achieved solutions with a numerical method (fourth-order Runge-Kutta). We argue that AGM can bring a substantial advance for researchers, professors, and students worldwide, because with the AGM coding system one can analytically solve complicated linear and nonlinear partial differential equations. The advantages and abilities of AGM are as follows: (a) Nonlinear differential equations (ODE, PDE) are directly solvable by this method. (b) Most of the time, equations can be solved without any dimensionless procedure, under any number of boundary or initial conditions. (c) AGM is always convergent for the given boundary or initial conditions. (d) Exponential, trigonometric, and logarithmic parameters in the nonlinear differential equation require no Taylor expansion under AGM, which yields high solution precision.
(e) AGM is very flexible in its coding system and can easily solve a variety of nonlinear differential equations with high, acceptable accuracy. (f) An important advantage of this method is analytical solution with high accuracy, e.g., for partial differential equations describing vibration in solids and waves in water and gas, requiring only minimal initial and boundary conditions. (g) It is very important to present a general and simple approach for solving most differential-equation problems with high nonlinearity in the engineering sciences, especially civil engineering, and to compare the output with a numerical method (fourth-order Runge-Kutta) and exact solutions.
Keywords: new approach, AGM, sets of coupled nonlinear differential equation, exact solutions, numerical
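The abstract's numerical benchmark, the classical fourth-order Runge-Kutta scheme, can be sketched in a few lines of Python. This is a generic integrator for y' = f(t, y), not an implementation of AGM itself:

```python
def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step for y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_solve(f, t0, y0, t_end, n):
    # Integrate from t0 to t_end in n equal steps
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

For y' = y with y(0) = 1, 100 steps reproduce y(1) = e to roughly ten digits, which is the kind of reference solution an analytical method is validated against.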
Procedia PDF Downloads 463
3548 Chemometric QSRR Evaluation of Behavior of s-Triazine Pesticides in Liquid Chromatography
Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević
Abstract:
This study considers the selection of the most suitable in silico molecular descriptors for characterizing s-triazine pesticides. Suitable descriptors among topological, geometrical, and physicochemical ones were used to establish quantitative structure-retention relationship (QSRR) models. The models were obtained using linear regression (LR) and multiple linear regression (MLR) analysis; the MLR models were established while avoiding multicollinearity among the selected molecular descriptors. The statistical quality of the established models was evaluated by standard and cross-validation statistical parameters. To detect similarity or dissimilarity among the investigated s-triazine pesticides and to classify them, principal component analysis (PCA) and hierarchical cluster analysis (HCA) were used and yielded similar groupings. This study is financially supported by COST Action TD1305.
Keywords: chemometrics, classification analysis, molecular descriptors, pesticides, regression analysis
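The two modeling ingredients, an MLR fit and a multicollinearity screen on the descriptor matrix, can be sketched with NumPy. The greedy correlation cutoff of 0.95 is illustrative, not the paper's setting:

```python
import numpy as np

def drop_collinear(X, threshold=0.95):
    # Greedy screen against multicollinearity: keep a descriptor column
    # only if its absolute correlation with every previously kept column
    # stays below the threshold.
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return X[:, keep], keep

def fit_mlr(X, y):
    # Ordinary least squares with an intercept column
    Xd = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return coef
```

In QSRR practice the response y is the chromatographic retention quantity and the columns of X are the computed molecular descriptors.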
Procedia PDF Downloads 392
3547 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on an encoding scheme (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) over low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, scenes contain scattered objects of different sizes, categories, layouts, and numbers, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while considering object-centric and scene-centric information. First, to exploit object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the two CNNs at multiple scales, we find that each CNN works better in a different scale range; a scale-wise CNN adaptation is therefore reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since a different number of features is extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
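The scale-wise normalization step can be sketched as follows: L2-normalize each scale's Fisher vector before average pooling, so a scale that contributes more local activations does not dominate the merged representation. A minimal sketch, not the authors' code:

```python
import numpy as np

def scale_wise_normalize(fvs):
    # fvs: list of per-scale Fisher vectors.  A different number of local
    # activations is extracted at each scale, so each vector is L2-
    # normalized before pooling to balance the scales' influence.
    out = []
    for v in fvs:
        n = np.linalg.norm(v)
        out.append(v / n if n > 0 else v)
    return out

def pool_scales(fvs):
    # Average-pool the normalized per-scale vectors into one representation
    return np.mean(scale_wise_normalize(fvs), axis=0)
```

The pooled vector is what would then be fed to the linear SVM classifier.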
Procedia PDF Downloads 331
3546 Statistical Analysis and Impact Forecasting of Connected and Autonomous Vehicles on the Environment: Case Study in the State of Maryland
Authors: Alireza Ansariyar, Safieh Laaly
Abstract:
Over the last decades, the vehicle industry has shown increased interest in integrating autonomous, connected, and electrical technologies in vehicle design, with the primary hope of improving mobility and road safety while reducing transportation's environmental impact. Using the State of Maryland (MD) in the United States as a pilot study, this research investigates CAVs' fuel consumption and air pollutants (CO, PM, and NOx) and builds linear regression models to predict CAVs' environmental effects. The Maryland transportation network was simulated in VISUM software, and data on a set of variables were collected through a comprehensive survey. The amounts of pollutants and fuel consumption were obtained from the macro simulation for the interval 2010 to 2021. Four linear regression models were then proposed to predict the future amounts of CO, NOx, and PM pollutants and of fuel consumption. The results highlight that CAVs' pollutants and fuel consumption correlate significantly with the income, age, and race of CAV customers. Furthermore, the reliability of the four statistical models was compared with the reliability of the macro-simulation outputs for the year 2030; the statistical models, fitted in SPSS, predicted the three pollutants and fuel consumption with less than 9% error. This study is expected to assist researchers and policymakers with planning decisions to reduce CAV environmental impacts in MD.
Keywords: connected and autonomous vehicles, statistical model, environmental effects, pollutants and fuel consumption, VISUM, linear regression models
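The forecasting step, fitting a trend on the 2010-2021 simulation outputs and extrapolating to 2030, reduces to ordinary least squares. A sketch with synthetic numbers; the values below are made up for illustration and are not Maryland data:

```python
import numpy as np

# Synthetic yearly fuel-consumption totals for 2010-2021 (arbitrary
# units); in the study these come from the VISUM macro simulation.
years = np.arange(2010, 2022)
fuel = 100.0 - 1.5 * (years - 2010)

# Fit a linear trend and extrapolate to the forecast year 2030
slope, intercept = np.polyfit(years, fuel, 1)
pred_2030 = slope * 2030 + intercept
```

The paper's models additionally include demographic predictors (income, age, race), which would enter as extra columns in a multiple-regression design matrix.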
Procedia PDF Downloads 445
3545 Authentication of Physical Objects with Dot-Based 2D Code
Authors: Michał Glet, Kamil Kaczyński
Abstract:
Counterfeit goods and documents are a global problem that demands increasingly sophisticated countermeasures. Existing techniques using watermarking or embedding symbols on objects are not suitable for all use cases. To address these special needs, we created a complete system for authenticating paper documents and physical objects with a flat surface. Objects are marked using orientation-independent, camera-noise-resistant 2D graphic codes, named DotAuth. Based on the identifier stored in the 2D code, the system performs basic authentication and supports more sophisticated analysis methods, e.g., those relying on augmented reality and the physical properties of the object. In this paper, we present the complete architecture, algorithms, and applications of the proposed system. A feature comparison of the proposed solution with other products is presented as well, pointing to many advantages that increase usability and efficiency in protecting physical objects.
Keywords: anti-forgery, authentication, paper documents, security
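The abstract does not specify the DotAuth code format, so as a toy illustration only: an integer identifier packed into a square dot grid with one parity dot per row. The real system additionally provides orientation independence and camera-noise resistance, which this sketch omits:

```python
SIZE = 6  # illustrative grid size, not DotAuth's actual geometry

def encode(identifier):
    # Pack the low bits of the identifier row by row; each row ends
    # with an even-parity dot so basic tampering is detectable.
    bits = [(identifier >> i) & 1 for i in range(SIZE * (SIZE - 1))]
    grid = []
    for r in range(SIZE):
        row = bits[r * (SIZE - 1):(r + 1) * (SIZE - 1)]
        row.append(sum(row) % 2)  # parity dot makes each row sum even
        grid.append(row)
    return grid

def decode(grid):
    # Recover the identifier from the data dots (parity dots excluded)
    bits = [b for row in grid for b in row[:-1]]
    return sum(b << i for i, b in enumerate(bits))

def check(grid):
    # Basic authentication step: every row must have even parity
    return all(sum(row) % 2 == 0 for row in grid)
```

A scanned grid that fails the parity check is rejected before any identifier lookup or further analysis is attempted.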
Procedia PDF Downloads 133
3544 Contrasted Mean and Median Models in Egyptian Stock Markets
Authors: Mai A. Ibrahim, Mohammed El-Beltagy, Motaz Khorshid
Abstract:
Emerging-market return distributions show significant departures from normality: they have fatter tails than the normal distribution and exhibit notable skewness and kurtosis. The classical Markowitz mean-variance framework is therefore not applicable to emerging markets, since it assumes normally distributed returns (with zero skewness and excess kurtosis) and a quadratic utility function. Markowitz mean-variance analysis can still be used under moderate non-normality, where it provides a good approximation of expected utility, but it may be ineffective under large departures from normality. Higher-moment models and median models have been suggested in the literature for asset allocation in this case: higher-moment models account for the insufficiency of describing a portfolio by only its first two moments, while the median model offers a robust statistic that is less affected by outliers than the mean. Tail risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) have been introduced in place of variance to capture the effect of risk. In this research, higher-moment models including Mean-Variance-Skewness (MVS) and Mean-Variance-Skewness-Kurtosis (MVSK) are formulated as single-objective non-linear programming (NLP) problems, and median models including Median-Value-at-Risk (MedVaR) and Median-Mean-Absolute-Deviation (MedMAD) are formulated as single-objective mixed-integer linear programming (MILP) problems. The higher-moment and median models are compared to several benchmark portfolios and tested on real financial data from the Egyptian main index EGX30. The results show that all the median models outperform the higher-moment models, providing higher final wealth for the investor over the entire study period.
In addition, the results confirm the inapplicability of the classical Markowitz mean-variance framework to the Egyptian stock market, as it resulted in very low realized profits.
Keywords: Egyptian stock exchange, emerging markets, higher moment models, median models, mixed-integer linear programming, non-linear programming
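The tail-risk measures used in the median models can be computed directly from a historical return sample. A minimal sketch; the confidence level and the returns in the usage example are illustrative:

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    # Historical Value-at-Risk: the alpha-quantile of the loss
    # distribution (losses = negated returns), reported as a positive
    # loss figure.  Conditional VaR: the mean loss at or beyond VaR.
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar
```

CVaR never falls below VaR, and unlike VaR it admits a linear-programming reformulation, which is consistent with the paper casting the median/tail-risk models as MILPs.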
Procedia PDF Downloads 314
3543 Optimality Conditions for Weak Efficient Solutions Generated by a Set Q in Vector Spaces
Authors: Elham Kiyani, S. Mansour Vaezpour, Javad Tavakoli
Abstract:
In this paper, we first introduce a new distance function on a linear space not necessarily endowed with a topology. The algebraic concepts of interior and closure are useful for studying optimization problems without topology, so we define Q-weak efficient solutions generated by the algebraic interior of a set Q, where Q is not necessarily convex. Studying nonconvex vector optimization is valuable since, for a convex cone K in a topological space, we have int(K) = cor(K), i.e., the topological interior of a convex cone K equals its algebraic interior. Moreover, we use a scalarization technique, based on the distance function generated by the vectorial closure of a set, to characterize these Q-weak efficient solutions. Scalarization is a useful approach for solving vector optimization problems: it reduces the optimization problem to a scalar problem, i.e., an optimization problem with a real-valued objective function. For instance, Q-weak efficient solutions of vector optimization problems can be characterized and computed as solutions of appropriate scalar optimization problems. In the convex case, linear functionals can serve as the objective functionals of the scalar problems, but in the nonconvex case a suitable objective function must be supplied. The aim of this paper is to present a new distance function that is useful for obtaining sufficient and necessary conditions for Q-weak efficient solutions of general optimization problems via scalarization.
Keywords: weak efficient, algebraic interior, vector closure, linear space
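For reference, the algebraic notions the abstract relies on admit the following standard formulations (a sketch of the usual definitions of the core and the vector closure; the paper's exact variants may differ in detail):

```latex
% Algebraic interior (core) of Q in a linear space X:
\operatorname{cor}(Q) = \{\, q \in Q \;:\; \forall v \in X \ \exists \lambda' > 0
  \ \text{such that}\ q + \lambda v \in Q \ \ \forall \lambda \in [0, \lambda'] \,\}

% One common formulation of the vector (algebraic) closure of Q:
\operatorname{vcl}(Q) = \{\, x \in X \;:\; \exists v \in X \ \forall \lambda' > 0
  \ \exists \lambda \in [0, \lambda'] \ \text{with}\ x + \lambda v \in Q \,\}
```

With these definitions, Q-weak efficiency can be stated entirely in linear-space terms, with no topology required.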
Procedia PDF Downloads 228