Search results for: radial basis function networks
8610 Coordinated Interference Canceling Algorithm for Uplink Massive Multiple Input Multiple Output Systems
Authors: Messaoud Eljamai, Sami Hidouri
Abstract:
Massive multiple-input multiple-output (MIMO) is an emerging technology for new cellular networks such as 5G systems. Its principle is to use many antennas per cell in order to maximize the network's spectral efficiency. Inter-cellular interference remains a fundamental problem, and massive MIMO is no exception to this rule: it improves performance only when the number of antennas is significantly greater than the number of users, which considerably limits the network's spectral efficiency. In this paper, a coordinated detector for an uplink massive MIMO system is proposed in order to mitigate inter-cellular interference. The proposed scheme combines the coordinated multipoint (CoMP) technique with an interference-cancelling algorithm. It requires the serving cell to send its received symbols, after processing, decision, and error detection, to the interfered cells via a backhaul link. Each interfered cell can then eliminate inter-cellular interference by generating the user's contribution and subtracting it from the received signal. The resulting signal is more reliable than the originally received signal, which allows the uplink massive MIMO system to improve its performance dramatically. Simulation results show that the proposed detector improves system spectral efficiency compared to classical linear detectors.
Keywords: massive MIMO, CoMP, interference canceling algorithm, spectral efficiency
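The regenerate-and-subtract step at the heart of such interference-cancelling detectors can be sketched in a few lines. The sketch below is a minimal numpy illustration, not the paper's detector: the interfering user's channel estimate `H_int` and the error-checked symbols `s_int` shared over the backhaul are assumed to be known exactly.

```python
import numpy as np

def cancel_interference(y, H_int, s_int):
    """Subtract a known interfering user's contribution from the
    received signal: y_clean = y - H_int @ s_int.
    y     : signal received at the interfered cell
    H_int : estimated channel of the interfering user
    s_int : symbols detected (and error-checked) by the serving cell
    """
    return y - H_int @ s_int

# Toy example: 4 receive antennas, 2 interfering streams
rng = np.random.default_rng(0)
H_int = rng.standard_normal((4, 2))
s_int = np.array([1.0, -1.0])
y_desired = rng.standard_normal(4)          # the cell's own users' signal
y = y_desired + H_int @ s_int               # received = desired + interference
y_clean = cancel_interference(y, H_int, s_int)
```

With perfect channel and symbol knowledge the interference is removed exactly; in practice channel-estimation and decision errors leave a residual, which is why the scheme forwards symbols only after error detection.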
Procedia PDF Downloads 147
8609 Topology Optimization of the Interior Structures of Beams under Various Load and Support Conditions with Solid Isotropic Material with Penalization Method
Authors: Omer Oral, Y. Emre Yilmaz
Abstract:
Topology optimization is an approach that optimizes material distribution within a given design space, for a given set of loads and boundary conditions, so that specified performance goals are met. It uses restrictions such as boundary conditions, load sets, and constraints to maximize the performance of the system. It differs from size and shape optimization methods but retains some features of both. In this study, the interior structures of parts were optimized using the SIMP (Solid Isotropic Material with Penalization) method. The volume of the part was a preassigned parameter, and minimum deflection was the objective function. The basic idea behind the theory was considered, and different methods were discussed. The Rhinoceros 3D design tool was used with the Grasshopper and TopOpt plugins to create and optimize parts. A Grasshopper algorithm was designed and tested for different beams, sets of arbitrarily located forces, and support types such as pinned, fixed, etc. Finally, 2.5D shapes were obtained and verified by observing the changes in the density function.
Keywords: Grasshopper, lattice structure, microstructures, Rhinoceros, solid isotropic material with penalization method, TopOpt, topology optimization
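The "penalization" in SIMP refers to the material interpolation rule E(ρ) = E_min + ρᵖ (E₀ − E_min): for a penalty exponent p > 1, intermediate densities contribute disproportionately little stiffness, which drives the optimizer toward solid/void (0/1) designs. A minimal sketch of that rule (the parameter values are illustrative, not those used in the study):

```python
import numpy as np

def simp_youngs_modulus(rho, E0=1.0, Emin=1e-9, p=3.0):
    """SIMP material interpolation.  rho in [0, 1] is the element
    density; Emin > 0 keeps the stiffness matrix non-singular for
    void elements; p > 1 penalizes intermediate densities."""
    rho = np.asarray(rho, dtype=float)
    return Emin + rho ** p * (E0 - Emin)

# A half-dense element contributes far less than half the stiffness:
e_half = simp_youngs_modulus(0.5)   # about 0.125 of E0 for p = 3
```

This is why the optimized density field tends toward clear solid/void regions, which the study verifies by observing the density function of the final 2.5D shapes.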
Procedia PDF Downloads 136
8608 When Religion is Meaningful and When Religion is Detrimental
Authors: Tennyson Samraj
Abstract:
The intent of this paper is threefold: (1) to propose the Epicurean tenet that beliefs associated with God are to be detached from the transcendent God, as the basis for ending religious conflicts; (2) to project John Hick's advice that no one has a monopoly over religious claims, as the basis for religious tolerance; and (3) to present a common-sense approach to respecting religion without disrespecting science. Religious claims create societal tension on two fronts: conflict between believers and conflict with the sciences. Anyone interested in the two fundamental questions related to consciousness and cosmology, namely how and why the universe exists, will have to deal with science and religion. While science addresses the question of how the universe came into existence and how it works, religion addresses the question of why the universe exists. If religion is a quest to understand why the universe exists, then we must address the question of when religion is considered meaningful and when it is considered detrimental. Is there a relationship between why we choose to live and why the universe exists? Science and religion are partners in defining our life in the context of the universe. Science without religion limits itself to knowing 'how' the universe came into existence without questioning 'why'; religion without science limits itself to knowing 'why' the universe exists without knowing 'how.' Is it possible to detach beliefs about God from God? When religious claims are understood in the context of the questions that necessitate the answers, religious claims can be understood as being separate from the transcendent God. This paper purports that this Epicurean tenet provides the impetus to address the questions that necessitate religious claims.
This helps us to explain the relevance of why we believe what we believe; to define the relationship between the self, the soul, and the sacred; and to establish the connection between this life and the after-life in the context of life beyond this planet.
Keywords: religion, Epicurus, John Hick, relevance of religion
Procedia PDF Downloads 548
8607 Investigation into the Optimum Hydraulic Loading Rate for Selected Filter Media Packed in a Continuous Upflow Filter
Authors: A. Alzeyadi, E. Loffill, R. Alkhaddar
Abstract:
Continuous upflow filters can combine nutrient (nitrogen and phosphate) and suspended solid removal in one unit process. The contaminant removal can be achieved chemically or biologically; in both processes, the filter removal efficiency depends on the interaction between the packed filter media and the influent. In this paper, a residence time distribution (RTD) study was carried out to understand and compare the transfer behaviour of contaminants through selected filter media packed in a laboratory-scale continuous upflow filter; the selected filter media are limestone and white dolomite. The experimental work was conducted by injecting a tracer (red drain dye, RDD) into the filtration system and then measuring the tracer concentration at the outflow as a function of time; the tracer injection was applied at hydraulic loading rates (HLRs) of 3.8 to 15.2 m h-1. The results were analysed according to the cumulative distribution function F(t) to estimate the residence time of the tracer molecules inside the filter media. The mean residence time (MRT) and the variance σ² are two moments of the RTD that were calculated to compare the RTD characteristics of limestone with those of white dolomite. The results showed that the exit-age distribution of the tracer was most favourable at HLRs of 3.8 to 7.6 m h-1 for limestone and at 3.8 m h-1 for white dolomite. At these HLRs, the cumulative distribution function F(t) revealed that the residence time of the tracer inside the limestone was longer than in the white dolomite: all the tracer took 8 minutes to leave the white dolomite at 3.8 m h-1, whereas the same amount of tracer took 10 minutes to leave the limestone at the same HLR. In conclusion, determining the optimal hydraulic loading rate, which achieves the better influent distribution over the filtration system, helps to identify the suitability of a material as filter media.
Further work will examine the efficiency of the limestone and white dolomite for phosphate removal by pumping a phosphate solution into the filter at HLRs of 3.8 to 7.6 m h-1.
Keywords: filter media, hydraulic loading rate, residence time distribution, tracer
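The two RTD moments used above follow directly from the normalized exit-age distribution E(t): MRT = ∫ t E(t) dt and σ² = ∫ (t − MRT)² E(t) dt. A minimal sketch of this calculation from a sampled tracer curve (the triangular pulse is a toy stand-in, not the measured RDD response):

```python
import numpy as np

def rtd_moments(t, c):
    """First two moments of the RTD from a tracer response curve c(t)
    sampled at uniformly spaced times t: normalize c to the discrete
    exit-age distribution E(t), then take the mean and variance."""
    e = c / c.sum()                      # discrete E(t); the dt factor cancels
    mrt = (t * e).sum()                  # mean residence time
    var = ((t - mrt) ** 2 * e).sum()     # variance sigma^2
    return mrt, var

# Toy tracer curve: triangular pulse peaking at t = 5 min, 0.1 min grid
t = np.linspace(0.0, 10.0, 101)
c = np.where(t <= 5.0, t, 10.0 - t)
mrt, var = rtd_moments(t, c)
```

The cumulative distribution F(t) referred to in the abstract is simply the running sum of E(t), so the "time for all the tracer to leave" corresponds to F(t) approaching 1.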
Procedia PDF Downloads 277
8606 Oat Beta-Glucan Attenuates the Development of Atherosclerosis and Improves the Intestinal Barrier Function by Reducing Bacterial Endotoxin Translocation in ApoE-/- Mice
Authors: Dalal Alghawas, Jetty Lee, Kaisa Poutanen, Hani El-Nezami
Abstract:
Oat β-glucan, a water-soluble non-starch linear polysaccharide, has been approved as a cholesterol-lowering agent by various food safety administrations and is commonly used to reduce the risk of heart disease. The molecular weight of oat β-glucan can vary depending on the extraction and fractionation methods, and it is not clear whether the molecular weight has a significant impact on reducing the acceleration of atherosclerosis. The aim of this study was to investigate three different oat β-glucan fractions on the development of atherosclerosis in vivo, with special focus on plaque stability and the intestinal barrier function. To test this, ApoE-/- female mice were fed a high-fat diet supplemented with oat bran, a high molecular weight (HMW) oat β-glucan fraction, or a low molecular weight (LMW) oat β-glucan fraction for 16 weeks. Atherosclerosis risk markers were measured in the plasma, heart, and aortic tree. Plaque size was measured in the aortic root and aortic tree. ICAM-1, VCAM-1, E-selectin, and P-selectin protein levels were assessed in the aortic tree to determine plaque stability at 16 weeks. The expression of p22phox at the aortic root was evaluated to study the NADPH oxidase complex involved in nitric oxide bioavailability and vascular elasticity. The tight junction proteins E-cadherin and β-catenin were analysed by western blot as an intestinal barrier function test. Plasma LPS, intestinal D-lactate levels, and hepatic FMO gene expression were measured to confirm whether the compromised intestinal barrier led to endotoxemia. The oat bran and HMW oat β-glucan diet groups were more effective than the LMW β-glucan diet group at reducing plaque size and showed marked improvements in plaque stability. The intestinal barrier was compromised in all the experimental groups; however, the endotoxemia levels were higher in the LMW β-glucan diet group.
The oat bran and HMW oat β-glucan diet groups were more effective at attenuating the development of atherosclerosis. Reasons for this could be the LMW oat β-glucan diet group's low viscosity in the gut and its inability to block the reabsorption of cholesterol. Furthermore, the low viscosity may allow more bacterial endotoxin translocation through the impaired intestinal barrier. In the future, food technologists should carefully consider how to incorporate LMW oat β-glucan as a health-promoting food.
Keywords: atherosclerosis, beta glucan, endotoxemia, intestinal barrier function
Procedia PDF Downloads 420
8605 Reliability Based Performance Evaluation of Stone Column Improved Soft Ground
Authors: A. GuhaRay, C. V. S. P. Kiranmayi, S. Rudraraju
Abstract:
The present study considers the effect of variation of different geotechnical random variables in the design of stone column-foundation systems for assessing the bearing capacity and consolidation settlement of highly compressible soil. The soil and stone column properties, and the spacing, diameter, and arrangement of the stone columns, are considered as the random variables. The probability of failure (Pf) is computed for a target degree of consolidation and a target safe load by Monte Carlo simulation (MCS). The study shows that the variations in the coefficient of radial consolidation (cr) and the cohesion of soil (cs) are the two most important factors influencing Pf. If the coefficient of variation (COV) of cr exceeds 20%, Pf exceeds 0.001, which is unsafe according to the guidelines of the US Army Corps of Engineers. The bearing capacity also exceeds its safe value for a COV of cs > 30%. It is also observed that as the spacing between the stone columns increases, the probability of reaching a target degree of consolidation decreases. Accordingly, design guidelines, considering both consolidation and bearing capacity of the improved ground, are proposed for different spacings and diameters of stone columns and geotechnical random variables.
Keywords: bearing capacity, consolidation, geotechnical random variables, probability of failure, stone columns
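The Monte Carlo estimate of Pf is conceptually simple: sample the random variables, evaluate the performance model, and count the fraction of samples that miss the target. The sketch below uses a hypothetical one-variable stand-in; the lognormal distribution for cr and the toy response u(cr) are illustrative assumptions, not the consolidation model of the paper:

```python
import numpy as np

def probability_of_failure(n_sims, cov_cr, target_u=0.9, seed=1):
    """Illustrative MCS estimate of Pf = P(degree of consolidation < target).
    cr is drawn lognormally with mean 2.0 and the given COV; the response
    u = 1 - exp(-2 cr) is a toy stand-in for the consolidation model."""
    rng = np.random.default_rng(seed)
    mean_cr = 2.0
    sigma = np.sqrt(np.log(1.0 + cov_cr ** 2))   # lognormal shape parameter
    mu = np.log(mean_cr) - 0.5 * sigma ** 2      # chosen so E[cr] = mean_cr
    cr = rng.lognormal(mu, sigma, n_sims)
    u = 1.0 - np.exp(-2.0 * cr)                  # toy degree of consolidation
    return float(np.mean(u < target_u))          # Pf = failures / trials

pf_low = probability_of_failure(20000, cov_cr=0.05)
pf_high = probability_of_failure(20000, cov_cr=0.5)
```

The qualitative behaviour matches the study's finding: widening the COV of cr fattens the lower tail of the consolidation response and raises Pf.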
Procedia PDF Downloads 359
8604 Executive Deficits in Non-Clinical Hoarders
Authors: Thomas Heffernan, Nick Neave, Colin Hamilton, Gill Case
Abstract:
Hoarding is the acquisition of and failure to discard possessions, leading to excessive clutter and significant psychological/emotional distress. From a cognitive-behavioural approach, excessive hoarding arises from information-processing deficits, as well as from problems with emotional attachment to possessions and beliefs about the nature of possessions. In terms of information processing, hoarders have shown deficits in executive functions, including working memory, planning, inhibitory control, and cognitive flexibility. However, previous research is often confounded by co-morbid factors such as anxiety, depression, or obsessive-compulsive disorder. The current study adopted a cognitive-behavioural approach, specifically assessing executive deficits and working memory in a non-clinical sample of hoarders, compared with non-hoarders. In this study, a non-clinical sample of 40 hoarders and 73 non-hoarders (defined by the Savings Inventory-Revised) completed the Adult Executive Functioning Inventory, which measures working memory and inhibition; the Dysexecutive Questionnaire-Revised, which measures general executive function; and the Hospital Anxiety and Depression Scale, which measures mood. The participant sample was made up of unpaid young adult volunteers who were undergraduate students and who completed the questionnaires on a university campus. The results revealed that, after observing no differences between hoarders and non-hoarders in age, sex, and mood, hoarders reported significantly more deficits in inhibitory control and general executive function than non-hoarders. There was no between-group difference in general working memory. This suggests that non-clinical hoarders have a specific difficulty with inhibitory control, which enables one to resist repeated, unwanted urges. This might explain hoarders' inability to resist urges to buy and keep items that are no longer of any practical use.
These deficits may be underpinned by general executive function deficiencies.
Keywords: hoarding, memory, executive, deficits
Procedia PDF Downloads 193
8603 Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition
Authors: A. Shoiynbek, K. Kozhakhmet, P. Menezes, D. Kuanyshbay, D. Bayazitov
Abstract:
Speech emotion recognition (SER) has received increasing research interest in recent years. Most research work has used emotional speech collected under controlled conditions: recordings made by actors imitating and artificially producing emotions in front of a microphone. There are four issues with that approach, namely: (1) the emotions are not natural, which means that machines learn to recognize fake emotions; (2) the emotions are very limited in quantity and poor in their variety of speaking; (3) SER is language-dependent; and (4) consequently, each time researchers want to start work on SER, they need to find a good emotional database in their language. In this paper, we propose an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition, and we describe the sequence of actions of the proposed approach. One of the first objectives in this sequence of actions is the speech detection issue. The paper gives a detailed description of the speech detection model based on a fully connected deep neural network for the Kazakh and Russian languages. Despite the high results in speech detection for Kazakh and Russian, the described process is suitable for any language. To illustrate the working capacity of the developed model, we have performed an analysis of speech detection and extraction on real tasks.
Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset
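As an architectural illustration only, a fully connected speech/non-speech classifier of the kind described can be sketched as follows. The layer sizes, the 13 MFCC inputs, and the random (untrained) weights are all assumptions for the sketch; the paper's actual model and its training data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SpeechDetectorMLP:
    """Hypothetical fully connected speech/non-speech classifier:
    an MFCC feature vector goes in, a probability of speech comes out.
    The weights here are random; in practice they would be trained on
    labelled speech and non-speech frames."""
    def __init__(self, n_mfcc=13, hidden=(64, 32)):
        sizes = (n_mfcc, *hidden, 1)
        # He-style initialization for each (weights, bias) pair
        self.layers = [
            (rng.standard_normal((a, b)) * np.sqrt(2.0 / a), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])
        ]

    def predict_proba(self, mfcc):
        x = np.asarray(mfcc, dtype=float)
        for w, b in self.layers[:-1]:
            x = relu(x @ w + b)        # hidden layers
        w, b = self.layers[-1]
        return sigmoid(x @ w + b)      # speech probability

detector = SpeechDetectorMLP()
p = detector.predict_proba(rng.standard_normal(13))
```

A frame would be labelled "speech" when the output probability exceeds a chosen threshold, and consecutive speech frames would then be extracted for the downstream emotion recognizer.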
Procedia PDF Downloads 101
8602 Scientific and Technical Basis for the Application of Textile Structures in Glass Using Pate De Verre Technique
Authors: Walaa Hamed Mohamed Hamza
Abstract:
Textile structures are the way in which the threading process of both thread and loom is done together to form the woven fabric. Different methods of interlacing the warp and the weft produce different textile structures, which differ from each other in surface appearance; these include the so-called simple textile structures. Textile structures are the basis of woven fabric, through which aesthetic values can be achieved in the textile industry by weaving warp threads with the weft at varying degrees, up to total control of one of the two sets of threads over the other. Hence the idea of how art and design can make use of different textile structures, under the modern technique of pate de verre, in the creation of designs suitable for glass products employed in interior architecture. The research problem: textile structures, in general, have a significant impact on the appearance of fabrics, both formally and aesthetically; how can we benefit from the characteristics of different textile structures in glass designs with different artistic values? The research achieves its goal through the investment of simple textile structures in innovative artistic designs using the pate de verre technique, as well as the use of designs resulting from the textile structures in external architecture to add various aesthetic values. The importance of the research lies in the revival of heritage using ancient techniques, in the synergy between different fields of applied arts such as glass and textiles, and in the study of the different and diverse effects resulting from each textile structure and the possibility of their use in various designs in interior architecture. The research demonstrates that, by investing in simple textile structures, innovative artistic designs produced using the pate de verre technique can be used in interior architecture.
Keywords: glass, interior architecture, pate de verre, textile structures
Procedia PDF Downloads 295
8601 Designing Functional Knitted and Woven Upholstery Textiles with SCOBY Film
Authors: Manar Y. Abd El-Aziz, Alyaa E. Morgham, Amira A. El-Fallal, Heba Tolla E. Abo El Naga
Abstract:
Different textile materials are usually used in upholstery. However, upholstery parts may become unhealthy when dust accumulates and bacteria grow on the surface, which negatively affects the user's health. Leather and artificial leather have also been used in upholstery, but leather has a high cost and artificial leather poses a potential chemical risk to users. Researchers have advanced a vegan leather made from bacterial cellulose produced by a symbiotic culture of bacteria and yeast (SCOBY). The SCOBY remains as a gelatinous cellulose biofilm found floating at the air-liquid interface of the container. However, this leather still needs some enhancement of its mechanical properties. This study aimed to prepare SCOBY, produce bamboo rib-knitted fabrics with two different stitch densities and a cotton woven fabric, and then laminate these fabrics with the prepared SCOBY film to enhance the mechanical properties of the SCOBY leather and, at the same time, add an anti-microbial function to the prepared fabrics. Laboratory tests were conducted on the produced samples, including tests for functional properties (anti-microbial activity, thermal conductivity, and light transparency), physical properties (thickness and mass per unit area), and mechanical properties (elongation, tensile strength, Young's modulus, and peel force). The results showed that the type of fabric significantly affected the SCOBY properties. According to the test results, the bamboo knitted fabric with the higher stitch density laminated with SCOBY was chosen, for its tensile strength and elongation, as the upholstery of a bed model with anti-microbial properties and comfort in the headrest design. Also, the single layer of SCOBY was chosen, for its light transparency and lower thermal conductivity, for the creation of a lighting unit built into the bed headboard.
Keywords: anti-microbial, bamboo, rib, SCOBY, upholstery
Procedia PDF Downloads 64
8600 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas
Authors: Sahithi Yarlagadda
Abstract:
The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be accommodated by predefined computational methods. Antenna design and optimization qualify for an evolutionary algorithmic approach, since the antenna parameters depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we can randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too wide to fit into a single function. So, the weight coefficients are obtained for all possible antenna electrical parameters and geometries; the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariant coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtaining the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain, directivity, etc. are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to get the maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulations. HFSS was chosen for the simulations and results.
MATLAB is used to generate the computations, combinations, and data logging. MATLAB is also used to apply machine learning algorithms and to plot the data to design the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself. The MATLAB parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters like the slot line characteristic impedance, the impedance of the stripline, the slot line width, the flare aperture size, and the dielectric constant; K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach to automated antenna optimization for the Vivaldi antenna.
Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm
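The evolutionary loop described above (random initial population, fitness ranking, recombination, mutation) can be sketched generically. In the intended workflow the fitness call would go out to the field solver via the HFSS API; here, in Python rather than MATLAB and purely for illustration, a toy analytic function stands in for it:

```python
import random

def genetic_maximize(fitness, bounds, pop_size=40, generations=100,
                     mut_rate=0.2, seed=7):
    """Minimal real-coded genetic algorithm: keep the better half as
    elites, breed children by uniform crossover of two elites, and
    apply Gaussian mutation clamped to the bounds.  `fitness` would
    be a call into the field solver; here any Python function works."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:
                    step = rng.gauss(0.0, 0.1 * (hi - lo))
                    child[i] = min(hi, max(lo, child[i] + step))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy stand-in fitness with its peak at (1.0, 2.0)
best = genetic_maximize(lambda v: -((v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2),
                        bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

Because every solver evaluation is expensive, this is exactly the loop the paper parallelizes: the per-candidate fitness calls are independent and can be farmed out to multiple HFSS simulations at once.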
Procedia PDF Downloads 110
8599 Role of Zinc Administration in Improvement of Faltering Growth in Egyptian Children at Risk of Environmental Enteric Dysfunction
Authors: Ghada Mahmoud El Kassas, Maged Atta El Wakeel
Abstract:
Background: Environmental enteric dysfunction (EED) is an impending problem that has flared up in recent decades to become pervasive in infants and children. EED is asymptomatic villous atrophy of the small bowel that is prevalent in the developing world and is associated with altered intestinal function and integrity. Evidence has suggested that supplementary zinc might ameliorate this damage by reducing gastrointestinal inflammation and may also benefit cognitive development. Objective: We tested whether zinc supplementation improves intestinal integrity, growth, and cognitive function in stunted children predicted to have EED. Methodology: This case-control prospective interventional study was conducted on 120 stunted Egyptian children aged 1-10 years, recruited from the Nutrition Clinic of the National Research Centre, and 100 age- and gender-matched healthy children as controls. In the primary phase of the study, full history taking, clinical examination, and anthropometric measurements were done, and the standard deviation scores (SDS) for all measurements were calculated. Serum markers such as zonulin, endotoxin core antibody (EndoCab), highly sensitive C-reactive protein (hsCRP), alpha-1-acid glycoprotein (AGP), and tumor necrosis factor (TNF), and fecal markers such as myeloperoxidase (MPO), neopterin (NEO), and alpha-1-antitrypsin (AAT) (as predictors of EED) were measured. Cognitive development was assessed (Bayley or Wechsler scores). Oral zinc at a dosage of 20 mg/d was supplemented to all cases, who were followed up for 6 months, after which the secondary phase of the study repeated the previous clinical, laboratory, and cognitive assessments. Results: Serum and fecal inflammatory markers were significantly higher in cases compared to controls. Zonulin (P < 0.01), EndoCab (P < 0.001), and AGP (P < 0.03) markedly decreased in cases by the end of the secondary phase. MPO, NEO, and AAT also showed a significant decline in cases at the end of the study (P < 0.001 for all).
A significant increase in mid-upper arm circumference (MUAC) (P < 0.01), weight-for-age z-score, and skinfold thicknesses (P < 0.05 for both) was detected at the end of the study, while height was not significantly affected. Cases also showed significant improvement in cognitive function in phase 2 of the study. Conclusion: The intestinal inflammatory state related to EED showed marked recovery after zinc supplementation. As a result, anthropometric and cognitive parameters showed obvious improvement with zinc supplementation.
Keywords: stunting, cognitive function, environmental enteric dysfunction, zinc
Procedia PDF Downloads 190
8598 Shuffled Structure for 4.225 GHz Antireflective Plates: A Proposal Proven by Numerical Simulation
Authors: Shin-Ku Lee, Ming-Tsu Ho
Abstract:
A newly proposed antireflective selector with a shuffled structure is reported in this paper. The proposed idea is made of two different quarter-wavelength (QW) slabs and is numerically supported by one-dimensional simulation results provided by the method of characteristics (MOC) to function as an antireflective selector. The two QW slabs are characterized by dielectric constants εᵣA and εᵣB and are uniformly divided into N and N+1 pieces respectively, which are then shuffled to form an antireflective plate with a B(AB)N structure, such that there is always one εᵣA piece between two εᵣB pieces. The other is an A(BA)N structure, where every εᵣB piece is sandwiched between two εᵣA pieces. Both proposed structures are numerically proved to function as QW plates. In order to allow maximum transmission through the proposed structures, the two dielectric constants are chosen to satisfy the relation (εᵣA)² = εᵣB > 1. The advantages of the proposed structures over traditional anti-reflection coating techniques are that only two components with two thicknesses are needed, and that they can be shuffled to form new QW structures. The design wavelength used to validate the proposed idea is 71 mm, corresponding to a frequency of about 4.225 GHz. The computational results are shown in both the time and frequency domains, revealing that the proposed structures produce minimum reflections around the frequency of interest.
Keywords: method of characteristics, quarter wavelength, anti-reflective plate, propagation of electromagnetic fields
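The reflection behaviour of such layered stacks can be cross-checked with the standard transfer-matrix method at normal incidence (a different tool than the MOC simulation used in the paper). The sketch below assembles the B(AB)N stack with (εᵣA)² = εᵣB at the 71 mm design wavelength; εᵣA = 2 and N = 3 are illustrative choices, not values taken from the paper:

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of a lossless dielectric layer at normal
    incidence (refractive index n, thickness d)."""
    delta = 2.0 * np.pi * n * d / wavelength
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(layers, wavelength, n_in=1.0, n_out=1.0):
    """|r|^2 of a stack of (n, d) layers between two half-spaces."""
    m = np.eye(2, dtype=complex)
    for n, d in layers:
        m = m @ layer_matrix(n, d, wavelength)
    num = n_in * m[0, 0] + n_in * n_out * m[0, 1] - m[1, 0] - n_out * m[1, 1]
    den = n_in * m[0, 0] + n_in * n_out * m[0, 1] + m[1, 0] + n_out * m[1, 1]
    return abs(num / den) ** 2

# Shuffled B(AB)^N stack with eps_B = eps_A^2, design wavelength 71 mm
wl0 = 71.0
eps_a = 2.0
n_a, n_b = np.sqrt(eps_a), eps_a        # eps_B = eps_A**2  =>  n_B = eps_A
N = 3
da = (wl0 / (4.0 * n_a)) / N            # QW slab of A cut into N pieces
db = (wl0 / (4.0 * n_b)) / (N + 1)      # QW slab of B cut into N + 1 pieces
stack = [(n_b, db)] + [(n_a, da), (n_b, db)] * N
R = reflectance(stack, wl0)
```

Sweeping the wavelength through `reflectance` reproduces the kind of frequency-domain reflection spectrum the paper reports; the classic single quarter-wave coating (index √nₛ on a substrate nₛ) falls out of the same routine as a sanity check with exactly zero reflection.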
Procedia PDF Downloads 146
8597 The Co-Simulation Interface SystemC/MATLAB Applied in JPEG and SDR Application
Authors: Walid Hassairi, Moncef Bousselmi, Mohamed Abid
Abstract:
Functional verification is a major part of today's system design task. Several approaches are available for verification on a high abstraction level, where designs are often modeled using MATLAB/Simulink. However, the different approaches are a barrier to a unified verification flow. In this paper, we propose a co-simulation interface between SystemC and MATLAB/Simulink to enable functional verification of multi-abstraction-level designs. The resulting verification flow is tested on the JPEG compression algorithm. The required synchronization of both simulation environments, as well as the data type conversion, is solved using the proposed co-simulation flow. We divided the JPEG encoder into two parts. The first, the DCT, is implemented in SystemC and represents the HW part. The second, consisting of quantization and entropy encoding, is implemented in MATLAB and represents the SW part. For communication and synchronization between these two parts, we use an S-Function and the MATLAB engine in Simulink. With this research premise, this study introduces a new hardware SystemC implementation of the DCT. We compare the result of our co-simulation to a pure SW/SW simulation. We observe a simulation time reduction of 88.15% in JPEG, and the design efficiency of the supplied design is 90% in SDR.
Keywords: hardware/software co-design, co-simulation, SystemC, MATLAB, S-Function, communication, synchronization
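The HW-side block, the 8x8 DCT of the JPEG encoder, is easy to model as a golden reference in a few lines, which is exactly the role the SW environment plays in such a co-simulation flow. A sketch of the separable orthonormal DCT-II (in Python rather than MATLAB, purely as a reference model):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II transform matrix (the JPEG 8x8 DCT basis)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)   # DC row gets the 1/sqrt(n) normalization
    return c

def dct2(block):
    """Separable 2-D DCT of a square pixel block: C @ B @ C.T"""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def idct2(coeffs):
    """Inverse 2-D DCT: C.T @ X @ C (C is orthogonal)."""
    c = dct_matrix(coeffs.shape[0])
    return c.T @ coeffs @ c
```

In the verification flow, the SystemC DCT's fixed-point output would be compared block by block against this floating-point reference, with the quantization and entropy-encoding stages consuming the checked coefficients.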
Procedia PDF Downloads 405
8596 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions
Authors: Valerii Dashuk
Abstract:
The usage of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with the help of this technique allows checking simultaneously the shift and the probability of that shift (i.e., portfolio risks). Another application is based on the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each 'point' of the domain of these distributions is calculated. This measure is transformed into a function of cumulative distribution functions and compared to the critical values. The table of critical values was designed from simulations. The approach was compared with other techniques for the univariate case; it differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strong sides of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most of the previous works. At the moment, the expansion to the two-dimensional case is done, and it allows testing jointly up to 5 parameters.
The derived technique is therefore equivalent to classic tests in standard situations but gives more efficient alternatives for nonstandard problems and on big amounts of data.
Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function
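The core idea, comparing two fully parameterized normal laws through their distribution functions, can be illustrated as follows. Note that this sketch uses the largest absolute CDF difference over a grid as a simple stand-in for the paper's measure; the actual transformation and its simulated critical values are not reproduced here:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def cdf_distance(mu1, s1, mu2, s2, lo=-10.0, hi=10.0, steps=2001):
    """Illustrative distance between two normal laws: the largest
    absolute difference of their CDFs over a grid.  A small value
    supports treating the two (mean, variance) pairs as identical;
    a real test would compare it against simulated critical values."""
    return max(abs(normal_cdf(x, mu1, s1) - normal_cdf(x, mu2, s2))
               for i in range(steps)
               for x in [lo + (hi - lo) * i / (steps - 1)])

d_same = cdf_distance(0.0, 1.0, 0.0, 1.0)   # identical laws -> 0
d_diff = cdf_distance(0.0, 1.0, 1.0, 1.0)   # unit mean shift
```

Because the distance depends on both parameters at once, a single comparison against a critical value yields a joint mean-variance decision rather than two separate marginal tests.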
Procedia PDF Downloads 174
8595 Improvement of Process Competitiveness Using Intelligent Reference Models
Authors: Julio Macedo
Abstract:
Several methodologies are now available to conceive improvements that make a process competitive, for example total quality management, process reengineering, six sigma, and the define-measure-analyze-improve-control (DMAIC) method. These improvements are of different natures and can be external to the process, which is represented by an optimization model or a discrete simulation model. In addition, the process stakeholders are numerous and desire different performances from the process. Hence, the methodologies above lack a tool to aid in the conception of the required improvements. To fill this void, we suggest the use of intelligent reference models. A reference model is a set of qualitative differential equations and an objective function that minimizes the gap between the current and desired performance indexes of the process. The reference models are intelligent: when they receive the current state of the problematic process and the desired performance indexes, they generate the required improvements for the problematic process. The reference models are fuzzy cognitive maps augmented with an objective function and trained using the improvements implemented by high-performance firms. Experiments with a group of students show that the reference models allow them to conceive more improvements than students who do not use these models.
Keywords: continuous improvement, fuzzy cognitive maps, process competitiveness, qualitative simulation, system dynamics
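As a rough illustration of the fuzzy-cognitive-map core of such a reference model (the weight values and function names below are invented for the example, not taken from the paper), concept activations can be iterated through a sigmoid until they settle:

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def run_fcm(weights, state, n_iter=50, lam=1.0):
    # Iterate concept activations A(t+1) = f(A(t) + A(t) @ W) until the
    # change falls below a tolerance; each row/column of `weights` is a
    # process concept (e.g. a performance index or an improvement lever).
    state = np.asarray(state, dtype=float)
    weights = np.asarray(weights, dtype=float)
    for _ in range(n_iter):
        new_state = sigmoid(state + state @ weights, lam)
        if np.max(np.abs(new_state - state)) < 1e-6:
            break
        state = new_state
    return state
```

In a trained map, the converged activations of the improvement concepts indicate which improvements to implement.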
Procedia PDF Downloads 87
8594 Comparative Analysis of Islamic Bank in Indonesia and Malaysia with Risk Profile, Good Corporate Governance, Earnings, and Capital Method: Performance of Business Function and Social Function Perspective
Authors: Achsania Hendratmi, Nisful Laila, Fatin Fadhilah Hasib, Puji Sucia Sukmaningrum
Abstract:
This study aims to compare Islamic banks in Indonesia and Malaysia using the RGEC method (Risk Profile, Good Corporate Governance, Earnings, and Capital). It examines the business and social performance of eleven Islamic banks in Indonesia and fifteen Islamic banks in Malaysia. The research used a quantitative approach; data collection was done by gathering the annual reports of the sampled banks over the period 2011-2015. The results of the independent-samples t-test and the Mann-Whitney test showed differences in the business performance of Islamic banks in Indonesia and Malaysia in terms of risk profile (FDR), GCG, and earnings (ROA). There were also differences in business and social performance in terms of earnings (ROE), capital (CAR), and sharia conformity indicators (PSR and ZR).
Keywords: business performance, Islamic banks, RGEC, social performance
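For readers unfamiliar with the two tests used, a minimal scipy sketch is shown below; the ROA figures are synthetic placeholders, not the banks' actual data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical ROA series for two groups of banks (illustrative only)
roa_group_a = rng.normal(1.2, 0.4, size=11)   # e.g. 11 banks
roa_group_b = rng.normal(0.8, 0.4, size=15)   # e.g. 15 banks

# Parametric comparison (assumes approximate normality of the indicator)
t_stat, t_p = stats.ttest_ind(roa_group_a, roa_group_b, equal_var=False)

# Non-parametric alternative when normality is doubtful
u_stat, u_p = stats.mannwhitneyu(roa_group_a, roa_group_b,
                                 alternative="two-sided")

print(f"Welch t-test: t={t_stat:.3f}, p={t_p:.4f}")
print(f"Mann-Whitney: U={u_stat:.1f}, p={u_p:.4f}")
```

A p-value below the chosen significance level indicates a difference between the two groups on that indicator.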
Procedia PDF Downloads 294
8593 Effect of Composition on Work Hardening Coefficient of Bismuth-Lead Binary Alloy
Authors: K. A. Mistry, I. B. Patel, A. H. Prajapati
Abstract:
In the present work, alloys of bismuth and lead are prepared in molecular-weight ratios of 9:1, 5:5, and 1:9 and grown by the zone-refining technique under a vacuum atmosphere. The EDAX of these samples is performed and the results are reported. The microhardness test has been used as an alternative test for measuring the material's tensile properties. The effect of temperature and load on the hardness of the grown alloys has been studied, and comparative studies of the work hardening coefficients are reported.
Keywords: EDAX, hardening coefficient, micro hardness, Bi-Pb alloy
Procedia PDF Downloads 306
8592 Local Spectrum Feature Extraction for Face Recognition
Authors: Muhammad Imran Ahmad, Ruzelita Ngadiran, Mohd Nazrin Md Isa, Nor Ashidi Mat Isa, Mohd ZaizuIlyas, Raja Abdullah Raja Ahmad, Said Amirul Anwar Ab Hamid, Muzammil Jusoh
Abstract:
This paper presents two techniques, local feature extraction using the image spectrum and low-frequency spectrum modelling using GMM, to capture the underlying statistical information and improve the performance of a face recognition system. Local spectrum features are extracted using overlapping sub-block windows mapped onto the face image. For each block, the spatial domain is transformed to the frequency domain using the DFT. Low-frequency coefficients are preserved by discarding high-frequency coefficients, applying a rectangular mask to the spectrum of the facial image. The low-frequency information is non-Gaussian in the feature space, and by combining several Gaussian functions with different statistical properties, the best feature representation can be modelled as a probability density function. Recognition is performed using the maximum likelihood value computed from pre-calculated GMM components. The method is tested on the FERET data sets and achieves a 92% recognition rate.
Keywords: local features modelling, face recognition system, Gaussian mixture models, FERET
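A minimal numpy sketch of the local-spectrum step follows; the block size, stride, and mask size are illustrative choices, not the paper's settings:

```python
import numpy as np

def local_spectrum_features(image, block=8, keep=4, step=4):
    # Slide an overlapping block window over the image, transform each
    # block with a 2-D DFT, and keep only the low-frequency corner of the
    # spectrum (a rectangular mask over the smallest frequency indices).
    feats = []
    h, w = image.shape
    for i in range(0, h - block + 1, step):
        for j in range(0, w - block + 1, step):
            spec = np.fft.fft2(image[i:i + block, j:j + block])
            feats.append(np.abs(spec[:keep, :keep]).ravel())
    return np.array(feats)
```

The resulting per-block feature vectors are what a Gaussian mixture model would then be fitted to (e.g. with `sklearn.mixture.GaussianMixture`).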
Procedia PDF Downloads 667
8591 Tomato-Weed Classification by RetinaNet One-Step Neural Network
Authors: Dionisio Andujar, Juan lópez-Correa, Hugo Moreno, Angela Ri
Abstract:
An increased number of weeds in tomato crops greatly lowers yields, so weed identification by machine learning is important for site-specific control. The latest advances in computer vision are a powerful tool for facing this problem. The analysis of RGB (red, green, blue) images through artificial neural networks has developed rapidly in the past few years, providing new methods for weed classification. The development of algorithms for crop and weed species classification aims at a real-time classification system using object detection algorithms based on convolutional neural networks. The study site was located in commercial fields. The classification system has been tested; the procedure can detect and classify weed seedlings in tomato fields. The input to the neural network was a set of 10,000 RGB images with a natural infestation of Cyperus rotundus L., Echinochloa crus-galli L., Setaria italica L., Portulaca oleracea L., and Solanum nigrum L. The validation process was done with a random selection of RGB images containing the aforementioned species. The mean average precision (mAP) was established as the metric for object detection. The results showed agreement higher than 95%. The system will provide the input for an online spraying system. This work thus plays an important role in site-specific weed management by reducing herbicide use in a single step.
Keywords: deep learning, object detection, CNN, tomato, weeds
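Since mAP scores each detection by matching it to ground truth at an intersection-over-union (IoU) threshold, a quick sketch of the IoU computation may help; this is the generic formula, not the authors' code:

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)
```

A predicted box typically counts as a true positive when its IoU with a ground-truth box of the same class exceeds a threshold such as 0.5.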
Procedia PDF Downloads 103
8590 Classification of EEG Signals Based on Dynamic Connectivity Analysis
Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović
Abstract:
In this article, the classification of target letters is performed using data from the EEG P300 speller paradigm. Neural networks trained on the results of dynamic connectivity analysis between different brain regions are used for classification. The dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient (RICI-imCPCC) method. This method overcomes the shortcomings of current dynamic connectivity analysis methods: the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding-window analysis with a wide window, and the high susceptibility to noise encountered with a narrow window. It does so by dynamically adjusting the window size using the RICI rule, extracting information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed based on the same analysis method. As far as we know, this research shows for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients
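The imaginary part of the complex Pearson correlation coefficient can be sketched from analytic signals. This is the generic fixed-window construction; it omits the paper's adaptive RICI windowing, and the function name is our own:

```python
import numpy as np
from scipy.signal import hilbert

def imaginary_cpcc(x, y):
    # Complex Pearson correlation between the analytic signals of two
    # channels; its imaginary part suppresses zero-lag (volume-conduction)
    # coupling and keeps only phase-shifted interaction.
    zx = hilbert(x - np.mean(x))
    zy = hilbert(y - np.mean(y))
    r = np.mean(zx * np.conj(zy)) / (np.std(zx) * np.std(zy))
    return r.imag
```

Two identical channels give a value near zero, while a quarter-period phase shift gives a magnitude near one.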
Procedia PDF Downloads 214
8589 Generalized Extreme Value Regression with Binary Dependent Variable: An Application for Predicting Meteorological Drought Probabilities
Authors: Retius Chifurira
Abstract:
The logistic regression model is the model most used to predict meteorological drought probabilities. When the dependent variable is extreme, however, the logistic model fails to capture drought probabilities adequately. To predict drought probabilities adequately, we use the generalized linear model (GLM) with the quantile function of the generalized extreme value distribution (GEVD) as the link function. Maximum likelihood estimation is used to estimate the parameters of the generalized extreme value (GEV) regression model. We compare the performance of the logistic and GEV regression models in predicting drought probabilities for Zimbabwe. The performance of the regression models is assessed using goodness-of-fit measures, namely the relative root mean square error (RRMSE) and the relative mean absolute error (RMAE). Results show that the GEV regression model performs better than the logistic model, providing a good alternative candidate for predicting drought probabilities. This paper provides the first application of a GLM derived from extreme value theory to predict drought probabilities for a drought-prone country such as Zimbabwe.
Keywords: generalized extreme value distribution, general linear model, mean annual rainfall, meteorological drought probabilities
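To make the link function concrete, here is a hedged sketch of the GEV-based inverse link next to the logistic one; the shape parameter value is an arbitrary illustration, since the paper estimates its parameters by maximum likelihood:

```python
import numpy as np

def gev_inverse_link(eta, xi=-0.3):
    # Inverse of the GEV quantile link: maps the linear predictor eta to a
    # probability via the GEV(0, 1, xi) cumulative distribution function,
    # p = exp(-(1 + xi * eta) ** (-1 / xi)) on the support 1 + xi*eta > 0.
    t = 1.0 + xi * np.asarray(eta, dtype=float)
    t = np.clip(t, 1e-12, None)          # enforce the support constraint
    return np.exp(-t ** (-1.0 / xi))

def logistic_inverse_link(eta):
    # Symmetric logistic inverse link, for comparison
    return 1.0 / (1.0 + np.exp(-np.asarray(eta, dtype=float)))
```

Unlike the logistic link, which is symmetric about 0.5, the GEV link is asymmetric, which is what lets it track probabilities of rare (extreme) events more faithfully.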
Procedia PDF Downloads 200
8588 Cloning and Expression of Azurin: A Protein Having Antitumor and Cell Penetrating Ability
Authors: Mohsina Akhter
Abstract:
Cancer has become a widespread disease around the globe and takes many lives every year. Different treatments are practiced, but all have potential side effects and somewhat low specificity toward target sites. Pseudomonas aeruginosa is known to secrete a protein, azurin, with a special anti-cancer function. It has a unique cell-penetrating peptide comprising 18 amino acids that can enter cancer cells specifically. The reported function of azurin is to stabilize p53 inside tumor cells and induce apoptosis through Bax-mediated cytochrome c release from mitochondria. At laboratory scale, we have made recombinant azurin by cloning the rpTZ57R/T-azu vector into E. coli strain DH-5α and subcloning the rpET28-azu vector into E. coli BL21-CodonPlus (DE3). High expression was ensured with IPTG induction at different concentrations, and the expression level was then optimized at a 1 mM concentration of IPTG for 5 hours. Purification was done using Ni²⁺ affinity chromatography. We conclude that azurin could be a remarkable improvement in cancer therapeutics if produced on a large scale. Azurin does not enter normal cells, so it would offer a safe and secure treatment for patients and protect them from hazardous anomalies.
Keywords: azurin, Pseudomonas aeruginosa, cancer, therapeutics
Procedia PDF Downloads 311
8587 Effect of Rehabilitative Nursing Program on Pain Intensity and Functional Status among Patients with Discectomy
Authors: Amal Shehata
Abstract:
Low back pain related to disc prolapse is localized in the lumbar area and may radiate to the lower extremities, starting from neurons near or around the spinal canal. Much of the population may be affected by disc prolapse within their lifetime, leading to lost productivity, disability, and loss of function. The purpose of this study was to examine the effect of a rehabilitative nursing program on pain intensity and functional status among patients with discectomy. Design: A quasi-experimental design was utilized. Setting: The study was carried out at the neurosurgery departments and outpatient clinics of Menoufia University and Teaching Hospitals in Menoufia governorate, Egypt. Instruments: Five instruments were used for data collection: a structured interviewing questionnaire, a functional assessment instrument, an observational checklist, a numeric rating scale, and the Oswestry low back pain disability questionnaire. Results: There was an improvement in the mean total knowledge score about the disease process, discectomy, and the rehabilitation program in the study group (25.32%) compared to the control group (7.32%). There was a highly statistically significant improvement in lumbar flexibility in the study group (80%) compared to the control group (30%) after the rehabilitation program. There was also a decrease in pain score in the study group (58% no pain) compared to the control group (28% no pain) after the rehabilitation program, and an improvement in the total disability score of the study group (zero %) regarding the effect of pain on activities of daily living compared to the control group (16%). Conclusion: Application of a rehabilitative nursing program for patients with discectomy had a proven positive effect on knowledge score, pain reduction, activities of daily living, and functional abilities. Recommendation: A continuous rehabilitative nursing program should be carried out for all patients immediately after discectomy surgery on a regular basis. A colored illustrated booklet about the rehabilitation program should also be available and distributed to all patients before surgery.
Keywords: discectomy, rehabilitative nursing program, pain intensity, functional status
Procedia PDF Downloads 141
8586 Refined Edge Detection Network
Authors: Omar Elharrouss, Youssef Hmamouche, Assia Kamal Idrissi, Btissam El Khamlichi, Amal El Fallah-Seghrouchni
Abstract:
Edge detection is one of the most challenging tasks in computer vision, owing to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, like trees and buildings, against varied backgrounds. It is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved edge detection compared with traditional methods like Sobel and Canny. However, images of complex scenes still challenge these methods, and the edges detected by existing approaches are often unrefined, with output images containing many erroneous edges. To overcome this, in this paper a refined edge detection network (RED-Net) is proposed using the mechanism of residual learning. By maintaining the high resolution of edges during the training process and conserving the resolution of the edge image through the network stages, we connect the pooling outputs at each stage with the output of the previous layer. Also, after each layer, we use an affine batch normalization layer as an erosion operation for the homogeneous regions in the image. The proposed method is evaluated on the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and quality of output images.
Keywords: edge detection, convolutional neural networks, deep learning, scale-representation, backbone
Procedia PDF Downloads 102
8585 Hybrid Gravity Gradient Inversion-Ant Colony Optimization Algorithm for Motion Planning of Mobile Robots
Authors: Meng Wu
Abstract:
Motion planning is a common task required of robots. A strategy combining ant colony optimization (ACO) with a gravity gradient inversion algorithm is proposed for motion planning of mobile robots. To realize an optimal motion planning strategy, the cost function in ACO is designed based on the gravity gradient inversion algorithm. Obstacles around the mobile robot cause gravity gradient anomalies, and a gradiometer installed on the robot detects these anomalies. After obtaining the anomalies, the gravity gradient inversion algorithm is employed to calculate the relative distance and orientation between the mobile robot and the obstacles. The relative distance and orientation deduced from the inversion are then employed in the ACO cost function to realize motion planning. The proposed strategy is validated by simulation and experimental results.
Keywords: motion planning, gravity gradient inversion algorithm, ant colony optimization
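A hedged sketch of how a clearance-based heuristic (standing in for the obstacle distance recovered by gravity gradient inversion) can enter the ACO state-transition rule; the function names and parameter values are illustrative, not the paper's:

```python
import numpy as np

def transition_probabilities(pheromone, obstacle_dist, alpha=1.0, beta=2.0):
    # Classic ACO transition rule: candidate moves are weighted by
    # pheromone (tau ** alpha) times a heuristic (eta ** beta).  Here the
    # heuristic eta rewards clearance from obstacles, playing the role of
    # the distance estimated by gravity gradient inversion.
    eta = np.asarray(obstacle_dist, dtype=float)     # larger = safer move
    weights = np.asarray(pheromone, dtype=float) ** alpha * eta ** beta
    return weights / weights.sum()

def update_pheromone(pheromone, chosen, deposit=1.0, rho=0.5):
    # Evaporate existing pheromone at rate rho, then deposit on the move
    # actually taken by the ant.
    pheromone = (1.0 - rho) * np.asarray(pheromone, dtype=float)
    pheromone[chosen] += deposit
    return pheromone
```

Repeating choose-then-update over many ants concentrates pheromone on short paths that also keep clear of obstacles.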
Procedia PDF Downloads 137
8584 Optimizing the Public Policy Information System under the Environment of E-Government
Authors: Qian Zaijian
Abstract:
E-government is one of the hot issues in current academic research on public policy and management. As the organic integration of information and communication technology (ICT) and public administration, e-government is one of the most important areas of the contemporary information society. The policy information system is a basic subsystem of the public policy system; its operation affects the overall effect of the policy process and can directly determine whether a public policy succeeds or fails. The basic principle of its operation is information collection, processing, analysis, and release for a specific purpose. The function of e-government for the public policy information system lies in promoting public access to policy information resources, enabling information transmission through e-participation and e-consultation in policy analysis and information processing, and providing electronic services for stored policy information, so as to optimize the policy information system. However, due to many factors, the capacity of e-government to optimize the policy information system has practical limits. In building e-government in our country, we should follow such paths as adhering to the principle of freedom of information, eliminating the information divide (gap), expanding e-consultation, and breaking down information silos, so as to optimize public policy information systems.
Keywords: China, e-consultation, e-democracy, e-government, e-participation, ICTs, public policy information systems
Procedia PDF Downloads 865
8583 Multiscale Syntheses of Knee Collateral Ligament Stresses: Aggregate Mechanics as a Function of Molecular Properties
Authors: Raouf Mbarki, Fadi Al Khatib, Malek Adouni
Abstract:
Knee collateral ligaments play a significant role in restraining excessive frontal motion (varus/valgus rotations). In this investigation, a multiscale framework was developed based on the structural hierarchies of the collateral ligaments, starting from the bottom (the tropocollagen molecule) up to the fiber-reinforced structure. Experimental data from failure tensile tests were the principal driver of the developed model. The model was calibrated statistically using Bayesian calibration because of the high number of unknown parameters. It was then scaled up to fit the real structure of the collateral ligaments and simulated under realistic boundary conditions. Predictions successfully describe the observed transient response of the collateral ligaments during tensile tests under pre- and post-damage loading conditions. The maximum stresses and strengths of the collateral ligaments were observed near the femoral insertions, a result in good agreement with experimental investigations. Also, for the first time, damage initiation and propagation were documented with this model as a function of the cross-link density between tropocollagen molecules.
Keywords: multiscale model, tropocollagen, fibrils, ligaments
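Bayesian calibration of such a model can be sketched with a one-parameter random-walk Metropolis sampler; the linear stress model and all values below are stand-ins for illustration, far simpler than the ligament model itself:

```python
import numpy as np

def metropolis_calibrate(observed, model, prior_sd=1.0, noise_sd=0.1,
                         n_steps=3000, step=0.1, seed=0):
    # Random-walk Metropolis over a single model parameter theta, with a
    # zero-mean Gaussian prior and Gaussian measurement noise.
    rng = np.random.default_rng(seed)

    def log_post(theta):
        resid = observed - model(theta)
        return (-0.5 * np.sum(resid ** 2) / noise_sd ** 2
                - 0.5 * theta ** 2 / prior_sd ** 2)

    theta, samples = 0.0, []
    lp = log_post(theta)
    for _ in range(n_steps):
        prop = theta + step * rng.normal()      # propose a nearby theta
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples)
```

With synthetic data generated from a known stiffness, the posterior samples (after burn-in) concentrate around the true value, which is the behaviour a multi-parameter calibration of the ligament model would generalize.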
Procedia PDF Downloads 159
8582 Analysis of the Impact of Foreign Direct Investment on the Integration of the Automotive Industry of Iran into Global Production Networks
Authors: Bahareh Mostofian
Abstract:
Foreign direct investment (FDI) has long been recognized as a crucial driver of economic growth and development in less-developed countries and of their integration into global production networks (GPNs). FDI brings not only capital from the core countries but also technology, innovation, and know-how that can upgrade the capabilities of host automotive industries. On the other hand, FDI can have negative impacts on host countries if it leads to significant import dependency. The Iranian automotive sector benefited greatly from FDI, with Western carmakers dominating the market. Over time, various forms of know-how, including joint ventures (JVs), trade licenses, and technical assistance, were provided, helping Iran upgrade its automotive industry. After the severe geopolitical obstacles imposed by both the EU and the U.S., however, the industry became over-reliant on imports of cars and spare parts, and the lack of emphasis on knowledge transfer further affected the growth and development of the Iranian automotive sector. To address these challenges, this research adopts a descriptive-analytical methodology to illustrate the gradual changes that accrued with foreign suppliers through FDI. The research findings show that after the two phases of imposed sanctions, the detrimental linkages created by over-reliance on car and spare-parts imports, without any industrial upgrading, negatively affected the growth and development of the national and assembled products of the Iranian automotive sector.
Keywords: less-developed country, FDI, GPNs, automotive industry, Iran
Procedia PDF Downloads 73
8581 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysia Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the feedback control algorithm (FCA). It is technically challenging to keep the core power output stable and within the acceptable error bands that the safety of the RTP demands. Currently, the system's power tracking performance could be considered unsatisfactory, and there is significant room for improvement. Hence, a new core power control design is very important to improve the current tracking and regulating performance by controlling the movement of the control rods to meet the demands of highly sensitive nuclear reactor power control. In this paper, a model predictive control (MPC) law is applied to control the core power. The model for core power control is based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core comprise the point kinetics model, thermal hydraulic models, and reactivity models. The proposed MPC is presented as a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control with predictions based on a T-filter, towards real-time implementation of MPC on hardware. This paper introduces sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal, and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. The tracking and regulating performance of the conventional controller and TFMPC are compared using MATLAB and analysed. In conclusion, the proposed TFMPC gives satisfactory performance in tracking and regulating core power, controlling the nuclear reactor with high reliability and safety.
Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC
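The point kinetics model underlying such controllers can be illustrated with a minimal one-delayed-group integration; the parameter values below are generic textbook-style figures, not RTP data:

```python
def point_kinetics(rho, beta=0.0065, Lambda=1e-4, lam=0.08,
                   n0=1.0, dt=1e-4, t_end=10.0):
    # One-delayed-group point kinetics, integrated with explicit Euler:
    #   dn/dt = ((rho - beta) / Lambda) * n + lam * C
    #   dC/dt = (beta / Lambda) * n - lam * C
    # n is relative neutron (power) level, C the precursor concentration.
    n = n0
    C = beta * n0 / (lam * Lambda)      # equilibrium precursor level
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dt * dn
        C += dt * dC
        # Reactivity rho is held constant here (step-response study);
        # a controller would adjust rho via control rod movement.
    return n
```

A zero-reactivity step leaves the power at its equilibrium value, while a small positive step produces the familiar prompt jump followed by slow delayed-neutron growth, which is the dynamics the MPC has to regulate.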
Procedia PDF Downloads 241