Search results for: type-2 fuzzy sets

340 Unequal Contributions of Parental Isolates in Somatic Recombination of the Stripe Rust Fungus

Authors: Xianming Chen, Yu Lei, Meinan Wang

Abstract:

The dikaryotic basidiomycete fungus, Puccinia striiformis, causes stripe rust, one of the most important diseases of wheat and barley worldwide. The pathogen reproduces largely asexually, and asexual recombination has been hypothesized to be one of the mechanisms underlying pathogen variation. To test the hypothesis and understand the genetic process of asexual recombination, somatic recombinant isolates were obtained under controlled conditions by inoculating susceptible host plants with a mixture of equal quantities of urediniospores of isolates with different virulence patterns and selecting through a series of inoculations on host plants carrying different genes for resistance to one of the parental isolates. The potential recombinant isolates were phenotypically characterized by virulence testing on the set of 18 wheat lines used to differentiate races of the wheat stripe rust pathogen, P. striiformis f. sp. tritici (Pst), for combinations of Pst isolates; or on both the set of wheat differentials and 12 barley differentials used to identify races of the barley stripe rust pathogen, P. striiformis f. sp. hordei (Psh), for combinations of a Pst isolate and a Psh isolate. The progeny and parental isolates were also genotypically characterized with 51 simple sequence repeat and 90 single-nucleotide polymorphism markers. From nine combinations of parental isolates, 68 potential recombinant isolates were obtained, of which 33 (48.5%) had virulence patterns similar to one of the parental isolates and 35 (51.5%) had virulence patterns distinct from either parental isolate. Of the 35 isolates with distinct virulence patterns, 11 were identified as races previously detected in natural collections and 24 were identified as new races. The molecular marker data confirmed 66 of the 68 isolates as recombinants. The percentages of parental marker alleles ranged from 0.9% to 98.9% and differed significantly from equal proportions in the recombinant isolates. Except for a couple of combinations, the greater or lesser contribution was not specific to any particular parental isolate, as the same parental isolate contributed more to some progeny isolates but less to other progeny isolates of the same combination. Unequal contribution by parental isolates appears to be a general rule in somatic recombination for the stripe rust fungus, which may be used to distinguish asexual recombination from sexual recombination when studying the evolutionary mechanisms of this highly variable fungal pathogen.
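
As a rough illustration of the statistical step described above (testing whether parental marker contributions depart from equal proportions), the following Python sketch applies a chi-square goodness-of-fit test to hypothetical allele counts; the counts are not data from the study.

```python
# Minimal sketch: testing whether two parental isolates contributed marker
# alleles to a recombinant isolate in equal proportions.
# The counts below are hypothetical, not data from the study.
from scipy.stats import chisquare

parent_a_alleles = 120   # markers in the progeny matching parental isolate A
parent_b_alleles = 21    # markers matching parental isolate B

observed = [parent_a_alleles, parent_b_alleles]
total = sum(observed)
expected = [total / 2, total / 2]   # null hypothesis: equal contributions

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
share_a = parent_a_alleles / total
print(f"Parent A share: {share_a:.1%}, chi-square = {stat:.2f}, p = {p_value:.4g}")
# A small p-value would indicate a significantly unequal contribution.
```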

Keywords: molecular markers, Puccinia striiformis, somatic recombination, stripe rust

Procedia PDF Downloads 244
339 Identification of Candidate Congenital Heart Defects Biomarkers by Applying a Random Forest Approach on DNA Methylation Data

Authors: Kan Yu, Khui Hung Lee, Eben Afrifa-Yamoah, Jing Guo, Katrina Harrison, Jack Goldblatt, Nicholas Pachter, Jitian Xiao, Guicheng Brad Zhang

Abstract:

Background and Significance of the Study: Congenital Heart Defects (CHDs) are the most common malformation at birth and one of the leading causes of infant death. Although the exact etiology remains a significant challenge, epigenetic modifications, such as DNA methylation, are thought to contribute to the pathogenesis of congenital heart defects. At present, no DNA methylation biomarkers are in use for the early detection of CHDs. The existing CHD diagnostic techniques are time-consuming and costly and can only be used to diagnose CHDs after an infant is born. The present study employed a machine learning technique to analyse genome-wide methylation data in children with and without CHDs, with the aim of finding methylation biomarkers for CHDs. Methods: The Illumina Human Methylation EPIC BeadChip was used to screen the genome-wide DNA methylation profiles of 24 infants diagnosed with congenital heart defects and 24 healthy infants without congenital heart defects. Primary pre-processing was conducted using the RnBeads and limma packages. The methylation levels of the top 600 genes with the lowest p-values were selected and further investigated using a random forest approach. ROC curves were used to analyse the sensitivity and specificity of each biomarker in both the training and test sample sets. The functions of the selected genes with high sensitivity and specificity were then assessed in terms of molecular processes. Major Findings of the Study: Three genes (MIR663, FGF3, and FAM64A) were identified from both training and validation data by random forests, with an average sensitivity and specificity of 85% and 95%, respectively. GO analyses of the top 600 genes showed that these putative differentially methylated genes were primarily associated with regulation of lipid metabolic process, protein-containing complex localization, and the Notch signalling pathway. The present findings highlight that aberrant DNA methylation may play a significant role in the pathogenesis of congenital heart defects.
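
The following Python sketch illustrates the kind of random forest plus ROC workflow described above; it uses synthetic methylation-like values and scikit-learn, and is not the authors' actual pipeline.

```python
# Illustrative sketch (not the authors' pipeline): random forest on methylation
# beta values with ROC-based evaluation. The data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n_samples, n_cpg = 48, 600               # 24 CHD + 24 controls, top 600 probes
X = rng.uniform(0, 1, size=(n_samples, n_cpg))   # methylation beta values
y = np.repeat([1, 0], 24)                # 1 = CHD, 0 = healthy control

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

probs = clf.predict_proba(X_te)[:, 1]
fpr, tpr, _ = roc_curve(y_te, probs)
print("AUC on the held-out set:", roc_auc_score(y_te, probs))

# Feature importances can be ranked to shortlist candidate biomarker probes.
top = np.argsort(clf.feature_importances_)[::-1][:3]
print("Top-ranked probe indices:", top)
```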

Keywords: biomarker, congenital heart defects, DNA methylation, random forest

Procedia PDF Downloads 159
338 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method

Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David

Abstract:

The deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoidal plane, and it is essential in geoid modeling. Computing the deflection of the vertical components of a point in a given area is necessary for evaluating the standard errors along the north-south and east-west directions. A combined approach to determining the deflection of the vertical components provides improved results but is labor-intensive without an appropriate method. The least squares method makes use of redundant observations in modeling a set of problems that obey certain geometric conditions. This research work aims at computing the deflection of the vertical components for Owerri West Local Government Area of Imo State using the geometric method as the field technique. In this method, a combination of Global Positioning System (GPS) observations in static mode and precise leveling was utilized: the geodetic coordinates of points established within the study area were determined by GPS observation, and their orthometric heights were obtained through precise leveling. Using least squares implemented in a MATLAB program, the estimated deflection of the vertical components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were computed. The computed standard errors of the north-south and east-west components were 5.5911e-005 and 1.4965e-004 arc seconds, respectively. Therefore, including the derived deflection of the vertical components in the ellipsoidal model will yield higher observational accuracy, since an ellipsoidal model alone is not tenable because of the large observational error it introduces in high-quality work. It is therefore important to include the determined deflection of the vertical components for Owerri West Local Government in Imo State, Nigeria.
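
A generic least-squares sketch of the adjustment step is shown below; the design matrix and observation vector are placeholders rather than the actual Owerri West network equations, but the standard-error computation mirrors the procedure described.

```python
# Generic least-squares sketch of the adjustment step described above.
# A and l are placeholders for the observation equations relating GPS/levelling
# observations to the two unknown deflection components (xi, eta); they are not
# the actual design matrix of the Owerri West network.
import numpy as np

A = np.array([[1.0, 0.2],
              [0.8, 1.1],
              [1.2, 0.9],
              [0.5, 1.4]])                    # design matrix (one row per observation)
l = np.array([0.010, 0.025, 0.018, 0.030])    # reduced observations (arc seconds)

x, _, _, _ = np.linalg.lstsq(A, l, rcond=None)
n, u = A.shape
v = A @ x - l                        # residuals
sigma0_sq = (v @ v) / (n - u)        # reference variance
Qxx = np.linalg.inv(A.T @ A)         # cofactor matrix of the unknowns
std_err = np.sqrt(sigma0_sq * np.diag(Qxx))

print("Estimated components (xi, eta):", x)
print("Standard errors:", std_err)
```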

Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height

Procedia PDF Downloads 213
337 An Overview of Posterior Fossa Associated Pathologies and Segmentation

Authors: Samuel J. Ahmad, Michael Zhu, Andrew J. Kobets

Abstract:

Segmentation tools continue to advance, evolving from manual methods to automated contouring technologies utilizing convolutional neural networks. These techniques have been used to evaluate ventricular and hemorrhagic volumes in the past but may be applied in novel ways to assess posterior fossa-associated pathologies such as Chiari malformations. Herein, we summarize literature pertaining to segmentation in the context of this and other posterior fossa-based diseases such as trigeminal neuralgia, hemifacial spasm, and posterior fossa syndrome. A literature search for volumetric analysis of the posterior fossa identified 27 papers in which semi-automated segmentation, automated segmentation, manual segmentation, linear measurement-based formulas, and the Cavalieri estimator were utilized. These studies produced better data than older methods that relied on formulas for rough volumetric estimation. The most commonly used segmentation technique was semi-automated segmentation (12 studies). Manual segmentation was the second most common technique (7 studies). Automated segmentation techniques (4 studies) and the Cavalieri estimator (3 studies), a point-counting method that uses a grid of points to estimate the volume of a region, were the next most commonly used techniques. The least commonly utilized technique was linear measurement-based formulas (1 study). Semi-automated segmentation produced accurate, reproducible results. However, it is apparent that no single semi-automated software package, open source or otherwise, has been widely applied to the posterior fossa. Fully automated segmentation via open source software such as FSL and FreeSurfer produced highly accurate posterior fossa segmentations. Various forms of segmentation have been used to assess posterior fossa pathologies, and each has its advantages and disadvantages. According to our results, semi-automated segmentation is the predominant method. However, atlas-based automated segmentation is an extremely promising method that produces accurate results. Future evolution of segmentation technologies will undoubtedly yield superior results, which may be applied to posterior fossa-related pathologies. Medical professionals will save time and effort analyzing large sets of data due to these advances.
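
For readers unfamiliar with the Cavalieri estimator mentioned above, the following minimal sketch shows the point-counting calculation; all counts and spacings are hypothetical, not measurements from the cited studies.

```python
# Minimal sketch of the Cavalieri (point-counting) volume estimator:
# volume ≈ slice spacing × area represented by each grid point × total points hit.
# All values below are hypothetical.
slice_spacing_mm = 3.0          # distance between image slices
grid_spacing_mm = 2.0           # spacing of the counting grid
area_per_point_mm2 = grid_spacing_mm ** 2
points_per_slice = [14, 22, 31, 35, 33, 26, 15]   # grid points falling inside the region

volume_mm3 = slice_spacing_mm * area_per_point_mm2 * sum(points_per_slice)
print(f"Estimated region volume: {volume_mm3:.0f} mm^3")
```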

Keywords: chiari, posterior fossa, segmentation, volumetric

Procedia PDF Downloads 107
336 The Methods of Customer Satisfaction Measurement and Its Statistical Analysis towards Sales and Logistic Activities in Food Sector

Authors: Seher Arslankaya, Bahar Uludağ

Abstract:

Meeting the needs and demands of customers and pleasing them are important requirements for companies in the food sector, where the growth of competition is significantly unpredictable. Customer satisfaction is also one of the key concepts, driven mainly by the wide range of customer preferences and expectations regarding the products and services introduced and delivered to them. In order to meet customer demands, companies in the food sector are expected to have a well-managed Total Quality Management (TQM) system, which sets out to improve the quality of products and services, to reduce costs, and to increase customer satisfaction by restructuring traditional management practices. TQM aims to increase customer satisfaction by meeting customer expectations and requirements. Achievement is determined with the help of customer satisfaction surveys, which are conducted to obtain immediate feedback and to provide quick responses. In addition, the surveys assist strategic planning, which helps to anticipate customers' future needs and expectations. Periodic measurement of customer satisfaction is a must because, with a better understanding of customer perceptions gained from the surveys (administered by questionnaire), companies have a clear basis for identifying their own strengths and weaknesses, which helps them keep their loyal customers, position themselves against their competitors, and map out their future progress and improvement. In this study, we propose a survey-based customer satisfaction measurement method and its statistical analysis for the sales and logistics activities of food firms. Customer satisfaction is discussed in detail. Furthermore, after analysing the data derived from the questionnaire applied to customers using the SPSS software, the various results obtained from the application are presented. By also applying an ANOVA test, the study analyses whether meaningful differences exist between customer demographic groups and their perceptions. The purpose of this study is also to identify requirements that help to remove the factors that decrease customer satisfaction and to produce loyal customers in the food industry. For this purpose, customer complaints were collected. Additionally, comments and suggestions are made according to the survey results, which should be useful for the strategic planning process in the food industry.
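
A minimal sketch of the ANOVA step described above is given below, using scipy rather than SPSS and synthetic satisfaction scores; the group labels are illustrative assumptions.

```python
# Hedged sketch of the ANOVA step: testing whether mean satisfaction scores
# differ across demographic groups. Scores and groups are synthetic; the study
# itself used SPSS on real questionnaire data.
from scipy.stats import f_oneway

scores_age_18_30 = [4.1, 3.8, 4.5, 3.9, 4.2]
scores_age_31_45 = [3.5, 3.9, 3.6, 3.8, 3.4]
scores_age_46_up = [4.4, 4.0, 4.6, 4.3, 4.1]

stat, p_value = f_oneway(scores_age_18_30, scores_age_31_45, scores_age_46_up)
print(f"F = {stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would suggest a meaningful difference in satisfaction between age groups.
```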

Keywords: customer satisfaction measurement and analysis, food industry, SPSS, TQM

Procedia PDF Downloads 251
335 Determining Face-Validity for a Set of Preventable Drug-Related Morbidity Indicators Developed for Primary Healthcare in South Africa

Authors: D. Velayadum, P. Sthandiwe, N. Maharaj, T. Munien, S. Ndamase, G. Zulu, S. Xulu, F. Oosthuizen

Abstract:

Introduction and aims of the study: It is the responsibility of the pharmacist to manage drug-related problems in order to ensure the greatest benefit to the patient. In order to prevent drug-related morbidity, pharmacists should be aware of medicines that may contribute to certain drug-related problems through their pharmacological action. In an attempt to assist healthcare practitioners in preventing drug-related morbidity (PDRM), indicators for prevention have been designed. There are currently no indicators available for primary healthcare in developing countries like South Africa, where the majority of the population access primary healthcare. There is, therefore, a need to develop such indicators, specifically with the aim of assisting healthcare practitioners in primary healthcare. Methods: A literature study was conducted to compile a comprehensive list of internationally developed PDRM indicators, using the search engines Google Scholar and PubMed. The MeSH term used to retrieve suitable articles was 'preventable drug-related morbidity indicators'. The comprehensive list of PDRM indicators obtained from the literature study was further evaluated for face validity. Face validation was done in duplicate by two sets of independent researchers to ensure 1) no duplication of indicators when compiling a single list, 2) inclusion of only medication available in primary healthcare, and 3) inclusion of medication currently available in South Africa. Results: The list of indicators, compiled from PDRM indicators in the USA, UK, Portugal, Australia, India, and Canada, contained 324 indicators. Of these, 184 were found to be duplicates and were omitted, leaving a list of 140. The 140 PDRM indicators were evaluated for face validity, and 97 were accepted as relevant to primary healthcare in South Africa; 43 indicators did not comply with the criteria and were omitted from the final list. Conclusion: This study is a first step in compiling a list of PDRM indicators for South Africa. It is important to take cognizance of the fact that health systems differ vastly internationally, and it is, therefore, important to develop country-specific indicators.

Keywords: drug-related morbidity, primary healthcare, South Africa, developing countries

Procedia PDF Downloads 147
334 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class-Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

Introduction: The problem of unbalanced data sets commonly appears in real-world applications. Due to unequal class distributions, many research papers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric discriminant analysis and nonparametric discriminant analysis in a three-class classification application with class-imbalanced data of diabetes risk groups. Methods: Data on 599 staff members from a health project in a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, comprising the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped up to 50 and 100 samples of 599 observations each for additional estimation of the misclassification error rate. Each data set was explored for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were applied to the 50 and 100 bootstrap samples and to the original data. In finding the optimal classification rule, the prior probabilities were set either as equal proportions (0.33:0.33:0.33) or as unequal proportions with three choices: (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} set to {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced data of diabetes risk groups.
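
The following scikit-learn sketch mirrors the comparison described above (LDA and QDA with unequal priors versus k-nearest neighbors) on synthetic, similarly imbalanced data; it is not the hospital data set.

```python
# Illustrative comparison on synthetic data: parametric discriminant analysis
# with unequal priors versus nonparametric k-NN for a three-class imbalanced problem.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = (539, 30, 30)                       # roughly 90% / 5% / 5%, as in the study
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(k, 4))
               for m, k in zip((0.0, 1.0, 2.0), n)])
y = np.repeat([0, 1, 2], n)             # 0 = non-risk, 1 = risk, 2 = diabetic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

models = {
    "LDA (priors .90/.05/.05)": LinearDiscriminantAnalysis(priors=[0.90, 0.05, 0.05]),
    "QDA (priors .90/.05/.05)": QuadraticDiscriminantAnalysis(priors=[0.90, 0.05, 0.05]),
    "k-NN (k = 3)": KNeighborsClassifier(n_neighbors=3),
}
for name, model in models.items():
    err = 1.0 - model.fit(X_tr, y_tr).score(X_te, y_te)   # misclassification error
    print(f"{name}: error rate = {err:.3f}")
```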

Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors

Procedia PDF Downloads 436
333 Presence and Absence: The Use of Photographs in Paris, Texas

Authors: Yi-Ting Wang, Wen-Shu Lai

Abstract:

The subject of this paper is the photography in the 1984 film Paris, Texas, directed by Wim Wenders. Wenders is well known as a film director as well as a photographer. We have found that photographs appear as an element in many of his films. Some of these photographs serve as details within the films, while others play important roles that are relevant to the story. This paper aims to consider photographs in film as a specific type of text, which is the output of both still photography and the film itself. In the film Paris, Texas, three sets of important photographs appear whose symbolic meanings are as dialectical as their text types. The relationship between the existence of these photos and the storyline is both dependent and isolated. The film's images fly by and progress into other images, while the photos in the film serve a unique narrative function by stopping the continuously flowing images, thus providing the viewer with a space for imagination and contemplation. They are more than just artistic forms; they also contain multiple meanings. The photographs in Paris, Texas play the role of both presence and absence according to their shifting meanings. There are references to their presence: photographs exist between film time and narrative time, so in terms of the interaction between the characters in the film, photographs are a common symbol of the beginning and end of the characters' journeys. In terms of the audience, the film's photographs are a link in the viewing frame structure, through which the creative motivation of the film director can be explored. Photographs also point to the absence of certain objects: the scenes in the photos represent an imaginary map of emotion. The town of Paris, Texas is therefore isolated from the physical presence of the photograph, and is far more abstract than the reality in the film. This paper embraces the ambiguous nature of photography and demonstrates its presence and absence in film with regard to the meaning of text. However, it is worth reflecting that the temporary nature of the interpretation of the film's photographs is far greater than for any other type of photographic text: the characteristics of the text cause the interpretation results to change along with variations in the interpretation process, which makes their meaning a dynamic process. The photographs' presence or absence in the context of Paris, Texas also demonstrates the presence and absence of the creator, time, the truth, and the imagination. The film becomes more complete as a result of the revelation of the photographs, while the intertextual connection between these two forms simultaneously provides multiple possibilities for the interpretation of the photographs in the film.

Keywords: film, Paris, Texas, photography, Wim Wenders

Procedia PDF Downloads 320
332 The Effect of a Theoretical and Practical Training Program on Student Teachers’ Acquisition of Objectivity in Self-Assessments

Authors: Zilungile Sosibo

Abstract:

Constructivism in teacher education is growing tremendously in both the developed and developing world. Proponents of constructivism emphasize the active engagement of students in the teaching and learning process. In an effort to keep students engaged while they learn to learn, teachers use a variety of methods to incorporate constructivism into teaching-learning situations. One area that has potential for realizing constructivism in the classroom is self-assessment. Sadly, students are rarely involved in the assessment of their work. Instead, the teacher, as the more knowledgeable party, dominates this process. Student involvement in self-assessment has the potential to teach student teachers to become objective assessors of their students' work by the time they become credentialed. This is important, as objectivity in assessment is a much-needed skill in classroom contexts in which teachers deal with students from diverse backgrounds and in which biased assessments should be avoided at all costs. The purpose of the study presented in this paper was to investigate whether student teachers acquired the skills of administering self-assessments objectively after they had been immersed in a formal training program and had participated in four sets of self-assessments. The objectives were to determine the extent to which they had mastered the skills of objective self-assessment, their growth and development in this area, and the challenges they encountered in administering self-assessments objectively. The research question was: To what extent did student teachers acquire objectivity in self-assessment after their theoretical and practical engagement in this activity? Data were collected from student teachers through participant observation and semi-structured interviews. The design was a qualitative case study. The sample consisted of 39 final-year student teachers enrolled in a Bachelor of Education teacher education program at a university in South Africa. Results revealed that the formal training program and participation in self-assessments had a minimal effect on students' acquisition of objectivity in self-assessment, due to factors associated with self-aggrandizement and hegemony, the latter resulting from gender, religious, and racial differences. These results have serious implications for the need to incorporate self-assessment in the teacher education curriculum, as well as for extended formal training programs for student teachers on assessment in general.

Keywords: objectivity, self-assessment, student teachers, teacher education curriculum

Procedia PDF Downloads 275
331 Crime Prevention with Artificial Intelligence

Authors: Mehrnoosh Abouzari, Shahrokh Sahraei

Abstract:

Today, with the increase in the quantity, severity, and variety of crimes, crime prevention faces a serious challenge: human resources alone, using traditional methods, will not be effective. One of the developments of the modern world is the presence of artificial intelligence in various fields, including criminal law. In fact, the use of artificial intelligence in criminal investigations and in fighting crime is a necessity in today's world. The application of artificial intelligence goes far beyond, and is even separate from, that of other technologies in the struggle against crime; moreover, its application in criminal science extends beyond prevention to the prediction of crime. Crime prevention addresses three factors (the offence, the offender, and the victim) and works by changing the conditions surrounding them: on the assumption that the offender acts rationally, it increases the cost and risk of crime so that he desists from delinquency, makes the victim aware of self-care and of the possibility of being exposed to danger, and makes crimes more difficult to commit. Artificial intelligence in the field of combating crime, social damage, and danger acts like an all-seeing eye that, regardless of time and place, looks into the future and predicts the occurrence of a possible crime, and can thus prevent crimes from occurring. The purpose of this article is to collect and analyze the studies conducted on the use of artificial intelligence in predicting and preventing crime: how capable is this technology of predicting crime and preventing it? The results show that the artificial intelligence technologies in use are capable of predicting and preventing crime and can find patterns in large data sets in a much more efficient way than humans. In crime prediction and prevention, the term artificial intelligence refers to the increasing use of technologies that apply algorithms to large sets of data to assist or replace police. In our discussion, artificial intelligence is used for predicting and preventing crime, including predicting the time and place of future criminal activities, effectively identifying patterns, and accurately predicting future behavior through data mining, machine learning, deep learning, data analysis, and neural networks. Because the knowledge of criminologists can provide insight into risk factors for criminal behavior, among other issues, computer scientists can match this knowledge with the data sets that artificial intelligence uses to inform them.

Keywords: artificial intelligence, criminology, crime, prevention, prediction

Procedia PDF Downloads 77
330 Testing the Life Cycle Theory on the Capital Structure Dynamics of Trade-Off and Pecking Order Theories: A Case of Retail, Industrial and Mining Sectors

Authors: Freddy Munzhelele

Abstract:

Setting: empirical research has shown that the life cycle theory has an impact on firms' financing decisions, particularly dividend pay-outs. Accordingly, the life cycle theory posits that as a firm matures, it reaches a level and capacity at which it distributes more cash as dividends. Young firms, on the other hand, prioritise investment opportunity sets and their financing; thus, they pay little or no dividends. Research on firms' financing decisions has also demonstrated, among other things, the adoption of the trade-off and pecking order theories to explain the dynamics of firms' capital structure. The trade-off theory holds that firms weigh the costs and benefits of debt to reach a favourable debt structure, while the pecking order theory is concerned with firms preferring a hierarchical order when choosing financing sources. The life cycle hypothesis as an explanation of financial managers' decisions regarding firms' capital structure dynamics appears to be an interesting link, yet this link has been neglected in corporate finance research. If this link is explored empirically, the financial decision-making alternatives will be enhanced immensely, since no conclusive evidence has yet been found on the dynamics of capital structure. Aim: the aim of this study is to examine the impact of the life cycle theory on the capital structure dynamics (trade-off and pecking order theories) of firms listed in the retail, industrial, and mining sectors of the JSE. These sectors are among the key contributors to GDP in the South African economy. Design and methodology: following the postpositivist research paradigm, the study is quantitative in nature and utilises secondary data obtained from the financial statements of the sampled firms for the period 2010 – 2022. The firms' financial statements will be extracted from the IRESS database. Since the data will be in panel form, a combination of static and dynamic panel data estimators will be used to analyse them. The overall data analyses will be done using the STATA program. Value add: this study directly investigates the link between the life cycle theory and the dynamics of capital structure decisions, particularly the trade-off and pecking order theories.
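
As a hedged sketch of the planned estimation step, the snippet below fits a static fixed-effects panel regression with the linearmodels package in Python (the study itself plans to use STATA); the variable names and data are placeholders, not the study's specification.

```python
# Hedged sketch of a static fixed-effects panel regression. Variable names
# (leverage, retained-earnings ratio as a life-cycle proxy, profitability) are
# illustrative placeholders, and the data are synthetic.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(2)
firms, years = [f"firm{i}" for i in range(30)], range(2010, 2023)
idx = pd.MultiIndex.from_product([firms, years], names=["firm", "year"])
df = pd.DataFrame({
    "leverage": rng.uniform(0, 1, len(idx)),            # capital structure proxy
    "re_ta": rng.uniform(0, 1, len(idx)),               # retained earnings / assets
    "profitability": rng.normal(0.1, 0.05, len(idx)),
}, index=idx)

model = PanelOLS(df["leverage"], df[["re_ta", "profitability"]],
                 entity_effects=True, time_effects=True)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)
```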

Keywords: life cycle theory, trade-off theory, pecking order theory, capital structure, JSE listed firms

Procedia PDF Downloads 62
329 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of more than ~65,000 color templates for comparison with observed objects. The method aims at extracting the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance of calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
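
A much-simplified sketch of template-based photometric classification is given below: an observed color vector is compared against a synthetic template library via chi-square, and class likelihoods are accumulated. It stands in for, and does not reproduce, the probability-density-function machinery described above.

```python
# Simplified sketch of template-based photometric classification.
# Templates, classes and photometric errors are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(3)
n_templates, n_colors = 5000, 4
template_colors = rng.normal(0.0, 1.0, size=(n_templates, n_colors))
template_class = rng.choice(["star", "galaxy", "quasar"], size=n_templates)

observed = np.array([0.3, -0.2, 0.9, 0.1])      # measured colors of one object
sigma = np.full(n_colors, 0.1)                  # photometric errors

chi2 = np.sum(((template_colors - observed) / sigma) ** 2, axis=1)
# Convert chi-square values into (unnormalized) likelihoods and sum per class,
# a crude stand-in for the probability-density-function approach of the method.
likelihood = np.exp(-0.5 * (chi2 - chi2.min()))
for cls in ("star", "galaxy", "quasar"):
    print(cls, likelihood[template_class == cls].sum())
```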

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 80
328 Osteoporosis and Weight Gain - Two Major Concerns for Menopausal Women - A Physiotherapy Perspective

Authors: Renu Pattanshetty

Abstract:

The aim of this narrative review is to highlight the impact of menopause on osteoporosis and weight gain. The review also aims to summarize physiotherapeutic strategies to combat them. A thorough literature search was conducted using electronic databases such as MEDLINE, PubMed, HighWire Press, and PubMed Central for English-language studies, using search terms such as menopause, osteoporosis, obesity, weight gain, exercises, physical activity, and physiotherapy strategies, from the year 2000 to date. Out of 157 studies, including meta-analyses, critical reviews and randomized clinical trials, a total of 84 were selected that met the inclusion criteria. The prevalence of obesity is increasing worldwide and is reaching epidemic proportions even among menopausal women. The prevalence of abdominal obesity is almost double that of general obesity, with rates in the US of 65.5% in women aged 40-59 years and 73.8% in women aged 60 years or more. Physical activity and exercise play a vital role in the prevention and treatment of osteoporosis and weight gain related to menopause, aiming to boost general well-being and ease symptoms brought about by natural body changes. Endurance exercise of about 30 minutes per day, 5 days per week, has been shown to decrease weight and prevent weight gain. In addition, strength training with at least 8 exercises of 8-12 repetitions, working the whole body and the large muscle groups, has been shown to produce positive outcomes. Hot flashes can be combatted through yogic breathing and relaxation exercises. Fall prevention strategies and resistance training are key to treating diagnosed cases of osteoporosis related to menopause. One to three sets of five to eight repetitions of four to six weight-bearing exercises have shown positive results. Menopause marks an important time for women to evaluate their risk of obesity and osteoporosis. It is a known fact that the bone benefits of exercise are lost when training is stopped; hence, practicing bone-smart habits and strict adherence to recommended physical activity programs that are enjoyable, safe and effective are recommended.

Keywords: menopause, osteoporosis, obesity, weight gain, exercises, physical activity, physiotherapy strategies

Procedia PDF Downloads 306
327 Influential Parameters in Estimating Soil Properties from Cone Penetrating Test: An Artificial Neural Network Study

Authors: Ahmed G. Mahgoub, Dahlia H. Hafez, Mostafa A. Abu Kiefa

Abstract:

The Cone Penetration Test (CPT) is a common in-situ test which generally investigates a much greater volume of soil, more quickly, than is possible with sampling and laboratory tests. Therefore, it has the potential to realize both cost savings and the rapid, continuous assessment of soil properties. The principal objective of this paper is to demonstrate the feasibility and efficiency of using artificial neural networks (ANNs) to predict the soil angle of internal friction (Φ) and the soil modulus of elasticity (E) from CPT results, considering the uncertainties and non-linearities of the soil. In addition, ANNs are used to study the influence of different parameters and to recommend which parameters should be included as inputs to improve the prediction. Neural networks discover relationships in the input data sets through the iterative presentation of the data and the intrinsic mapping characteristics of neural topologies. The General Regression Neural Network (GRNN) is one of the powerful neural network architectures utilized in this study. A large amount of field and experimental data, including CPT results, plate load tests, direct shear box tests, grain size distributions and calculated overburden pressures, was obtained from a large project in the United Arab Emirates. This data was used for the training and validation of the neural network. A comparison was made between the results obtained from the ANN approach and some common traditional correlations that predict Φ and E from CPT results, with respect to the actual results of the collected data. The results show that the ANN is a very powerful tool. Very good agreement was obtained between the results estimated from the ANN and the actual measured results, in comparison to other correlations available in the literature. The study recommends some easily available parameters that should be included in the estimation of the soil properties to improve the prediction models. It is shown that the use of the friction ratio in the estimation of Φ and the use of fines content in the estimation of E considerably improve the prediction models.
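
Since the GRNN is central to the approach, the following NumPy sketch shows its core idea, a Gaussian kernel-weighted average of training targets; the CPT features and friction-angle targets are synthetic placeholders, not the UAE data set.

```python
# Minimal General Regression Neural Network (GRNN) sketch: a Gaussian
# kernel-weighted average of training targets. Inputs and targets are synthetic.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Predict targets for X_query with a GRNN of spread parameter sigma."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)          # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))              # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / np.sum(w))      # weighted average of targets
    return np.array(preds)

rng = np.random.default_rng(4)
X_train = rng.uniform(0, 1, size=(200, 3))   # e.g. tip resistance, friction ratio, depth
phi_train = 25 + 15 * X_train[:, 0] + rng.normal(0, 1, 200)   # friction angle (deg)

X_query = rng.uniform(0, 1, size=(5, 3))
print(grnn_predict(X_train, phi_train, X_query))
```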

Keywords: angle of internal friction, cone penetrating test, general regression neural network, soil modulus of elasticity

Procedia PDF Downloads 416
326 Content-Aware Image Augmentation for Medical Imaging Applications

Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang

Abstract:

Machine learning based Computer-Aided Diagnosis (CAD) is gaining much popularity in medical imaging and diagnostic radiology. However, it requires a large amount of high quality, labeled training image data. The training images may come from different sources and be acquired on different radiography machines produced by different manufacturers, or be digital or digitized copies of film radiographs, with various sizes as well as different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on the original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs while the others are digitized from analog CXR films. With the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching used to normalize the pixel intensities of digital radiographs based on the pixel intensity values of digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified, off-the-shelf CXR dataset composed of the radiographs included in both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm can adequately preserve the important information in lung fields, local structures, and the global visual effect. The proposed method can be used to augment training and testing image data sets so that the trained machine learning model can process CXRs from various sources, and it can potentially be used broadly in any medical imaging application.
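
The histogram-matching step described above can be sketched with scikit-image as follows; the images are synthetic arrays, and the seam-carving resizing step is not reproduced here.

```python
# Sketch of the intensity-normalization step: map a digitized film CXR onto the
# intensity distribution of a digital reference CXR. Images here are synthetic
# arrays, not radiographs from the Montgomery or JSRT datasets.
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(5)
digitized_cxr = rng.normal(0.4, 0.15, size=(256, 256)).clip(0, 1)   # source image
digital_ref = rng.normal(0.6, 0.10, size=(256, 256)).clip(0, 1)     # reference image

normalized = match_histograms(digitized_cxr, digital_ref)
print("source mean:", digitized_cxr.mean(), "-> normalized mean:", normalized.mean())
```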

Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving

Procedia PDF Downloads 224
325 Approaches to Valuing Ecosystem Services in Agroecosystems From the Perspectives of Ecological Economics and Agroecology

Authors: Sandra Cecilia Bautista-Rodríguez, Vladimir Melgarejo

Abstract:

Climate change, loss of ecosystems, increasing poverty, increasing marginalization of rural communities and declining food security are global issues that require urgent attention. In this regard, a great deal of research has focused on how agroecosystems respond to these challenges, as they provide ecosystem services (ES) that lead to higher levels of resilience, adaptation, productivity and self-sufficiency. Hence, the valuation of ecosystem services plays an important role in the decision-making process for the design and management of agroecosystems. This paper aims to define the link between ecosystem service valuation methods and ES value dimensions in agroecosystems from the perspectives of ecological economics and agroecology. The valuation methodologies were identified through a literature review in the fields of agroecology and ecological economics, based on a strategy of information search and classification. The conceptual framework of the work is based on the multidimensionality of value, considering the social, ecological, political, technological and economic dimensions. Likewise, the valuation process requires consideration of the ecosystem functions associated with ES, such as the regulation, habitat, production and information functions. In this way, valuation methods for ES in agroecosystems can integrate more than one value dimension and at least one ecosystem function. The results make it possible to correlate the ecosystem functions with the ecosystem services valued, the specific tools or models used, the value dimensions and the valuation methods. The main methodologies identified are (1) multi-criteria valuation, (2) deliberative-consultative valuation, (3) valuation based on system dynamics modeling, (4) valuation through energy or biophysical balances, (5) valuation through fuzzy logic modeling, and (6) valuation based on agent-based modeling. Among the main conclusions, it is highlighted that the system dynamics modeling approach has a high potential for development in valuation processes, due to its ability to integrate other methods, especially multi-criteria valuation and energy and biophysical balances, and to describe through causal cycles the interrelationships between ecosystem services and the dimensions of value in agroecosystems, thus showing the relationships between the value of ecosystem services and the welfare of communities. As for methodological challenges, it is relevant to achieve the integration of the tools and models provided by different methods and to incorporate the characteristics of a complex system such as the agroecosystem, which would reduce the limitations in ES valuation processes.
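
As a minimal illustration of the first family of methods listed above (multi-criteria valuation), the sketch below aggregates hypothetical service scores across value dimensions with stakeholder weights; all numbers are placeholders.

```python
# Hedged sketch of a simple multi-criteria aggregation: ecosystem services
# scored against several value dimensions and combined with stakeholder weights.
# Scores and weights are illustrative placeholders only.
import numpy as np

services = ["pollination", "soil fertility", "water regulation"]
dimensions = ["ecological", "social", "economic"]

# Normalized scores (0-1) of each service on each value dimension.
scores = np.array([[0.9, 0.6, 0.4],
                   [0.8, 0.7, 0.6],
                   [0.7, 0.8, 0.5]])
weights = np.array([0.4, 0.35, 0.25])      # stakeholder weights over dimensions (sum to 1)

composite = scores @ weights               # weighted sum per service
for name, value in zip(services, composite):
    print(f"{name}: composite value {value:.2f}")
```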

Keywords: ecological economics, agroecosystems, ecosystem services, valuation of ecosystem services

Procedia PDF Downloads 125
324 Utilising Indigenous Knowledge to Design Dykes in Malawi

Authors: Martin Kleynhans, Margot Soler, Gavin Quibell

Abstract:

Malawi is one of the world's poorest nations and, consequently, the design of flood risk management infrastructure comes with a different set of challenges. Good quality hydromet data are lacking, both in spatial coverage and in quality, and the challenge in the design of flood risk management infrastructure is compounded by the fact that maintenance is almost completely non-existent and that solutions have to be simple to be effective. Solutions should not require any further resources to remain functional after completion, and they should be resilient. They also have to be cost-effective. The Lower Shire Valley of Malawi suffers from frequent flood events. Various flood risk management interventions have been designed across the valley during the course of the Shire River Basin Management Project – Phase I, and due to the data-poor environment, indigenous knowledge was relied upon to a great extent for hydrological and hydraulic model calibration and verification. However, indigenous knowledge comes with the caveat that it is 'fuzzy' and that it can be manipulated for political reasons. The experience in the Lower Shire Valley suggests that indigenous knowledge is unlikely to invent a problem where none exists, but that flood depths and extents may be exaggerated to secure prioritization of the intervention. Indigenous knowledge relies on the memory of a community and cannot foresee events that exceed past experience, that could occur differently from those that have occurred in the past, or where flood management interventions change the flow regime. This complicates communication of planned interventions to local inhabitants. Indigenous knowledge is, for the most part, intuitive, but flooding can sometimes be counter-intuitive, and the rural poor may have a lower trust of technology. Due to a near complete lack of maintenance of infrastructure, infrastructure has to be designed with no moving parts and no requirement for energy inputs. This precludes pumps, valves, flap gates and sophisticated warning systems. Dyke designs in this project included 'flood warning spillways', which double up as pedestrian and animal crossing points and which warn residents of impending dangerous water levels behind dykes before water levels that could cause a possible dyke failure are reached. Locally available materials and erosion protection using vegetation were used wherever possible to keep costs down.

Keywords: design of dykes in low-income countries, flood warning spillways, indigenous knowledge, Malawi

Procedia PDF Downloads 284
323 Co-Participation: Towards the Sustainable Micro-Rural Complex in China

Authors: Danhua Xu, Zhenlan Qian, Zhu Wang, Jiayan Fu, Ling Wang

Abstract:

A new business mode called the rural complex has been proposed by China's government to promote the development of the economy in rural areas. However, given current national conditions, including the great number of farmers working small-scale farmland and the uncertain enthusiasm of enterprises, it is challenging to develop large-scale rural complexes. In response to these dilemmas, this paper puts forward the micro-rural complex to boost small-scale farms through co-participation in a bottom-up mode. By analyzing the potential opportunities to find a suitable mode and by exploring interdisciplinary and interdepartmental co-participation between different actors, beyond architectural design and spatial planning, the paper tries to establish a complete process towards the sustainable micro-rural complex and conducts an ongoing practice to optimize it, bringing new insights and reference to rural development. Following the transformation of the economy, the micro-rural complex will develop in two phases, both of which can be discussed in three parts: the economic mode, the spatial support, and the cooperating mechanism. The first stage is agricultural co-participation, based on the rise of community-supported agriculture (CSA), in which consumers buy organically grown products directly from farmers at a higher price to support small-scale agriculture and overcome food safety issues. The following stage sets up agritourism catering to citizens, with restaurants, inns and other tourist service facilities to be planned and designed. Throughout the process, interdisciplinary co-participation plays an important role in providing guidelines and consultation from agronomists, architects and rural planners to the farmers. This mode has been applied to an ongoing farm project, through which it is being explored in a more practical way. In conclusion, the micro-rural complex aims at creating a balanced urban-rural relationship through co-participation that takes advantage of the different actors. The spatial development is considered from the perspectives of the economic mode and social organization. The integration of this mode, based on small-scale agriculture, will contribute to sustainable growth and realize long-run development in rural areas.

Keywords: micro-rural complex, co-participation, sustainable development, China

Procedia PDF Downloads 264
322 Development of a PJWF Cleaning Method for Wet Electrostatic Precipitators

Authors: Hsueh-Hsing Lu, Thi-Cuc Le, Tung-Sheng Tsai, Chuen-Jinn Tsai

Abstract:

This study designed and tested a novel wet electrostatic precipitator (WEP) system featuring a Pulse-Air-Jet-Assisted Water Flow (PJWF) to shorten water cleaning time, reduce water usage, and maintain high particle removal efficiency. The PJWF injects cleaning water tangentially at the cylinder wall, rapidly enhancing the momentum of the water flow for efficient dust cake removal. Each PJWF cycle uses approximately 4.8 liters of cleaning water in 18 seconds. Comprehensive laboratory tests were conducted using a single-tube WEP prototype within a flow rate range of 3.0 to 6.0 cubic meters per minute (CMM), operating voltages between -35 and -55 kV, and a high-frequency power supply. The prototype, consisting of 72 sets of double-spike rigid discharge electrodes, demonstrated that with the PJWF, -35 kV, and 3.0 CMM, the PM2.5 collection efficiency remained as high as the initial value of 88.02±0.92% after loading with Al2O3 particles at 35.75±2.54 mg/Nm3 for 20 hours of continuous operation. In contrast, without the PJWF, the PM2.5 collection efficiency drastically dropped from 87.4% to 53.5%. Theoretical modeling closely matched the experimental results, confirming the robustness of the system's design and its scalability for larger industrial applications. Future research will focus on optimizing the PJWF system, exploring its performance with various particulate matter, and ensuring long-term operational stability and reliability under diverse environmental conditions. Recently, this WEP was combined with a preceding cooling tower (CT) and a honeycomb wet scrubber (HWS) and pilot-tested (40 CMM) to remove SO2 and PM2.5 emissions at a sintering plant of an integrated steel-making plant. Pilot-test results showed that the removal efficiencies for SO2 and PM2.5 emissions are as high as 99.7% and 99.3%, respectively, with ultralow emitted concentrations of 0.3 ppm and 0.07 mg/m3, respectively, while white smoke is eliminated at the same time. These new technologies are being used in industry, and their application in different fields is expected to expand, substantially reducing air pollutant emissions for better ambient air quality.
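
The removal-efficiency figure of merit quoted above can be reproduced from concentrations as in the sketch below; the inlet concentrations are assumptions chosen only to be consistent with the reported efficiencies and outlet values.

```python
# Back-of-the-envelope sketch of the removal-efficiency figure of merit:
# efficiency = 1 - outlet/inlet concentration. Inlet values are assumed for
# illustration; only the outlet concentrations are quoted in the abstract.
def removal_efficiency(c_in, c_out):
    return 1.0 - c_out / c_in

pm25_in, pm25_out = 10.0, 0.07      # mg/m3 (assumed inlet, reported outlet)
so2_in, so2_out = 100.0, 0.3        # ppm  (assumed inlet, reported outlet)

print(f"PM2.5 removal: {removal_efficiency(pm25_in, pm25_out):.1%}")
print(f"SO2 removal:   {removal_efficiency(so2_in, so2_out):.1%}")
```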

Keywords: wet electrostatic precipitator, pulse-air-jet-assisted water flow, particle removal efficiency, air pollution control

Procedia PDF Downloads 24
321 An Exploratory Study of Entrepreneurial Satisfaction among Older Founders

Authors: Catarina Seco Matos, Miguel Amaral

Abstract:

The developed world is facing falling birth rates and rising life expectancies. As a result, the overall demographic structure of societies is becoming markedly older. This creates economic and political pressure towards the extension of individuals' working lives. On the other hand, evidence shows that some older workers choose to stay in the labour force as employees, whereas others choose to pursue a more entrepreneurial occupational path. Thus, entrepreneurship or self-employment may be an option for the socioeconomic participation of older individuals. Previous research on senior entrepreneurship is scarce and focuses mainly on entrepreneurship determinants and individuals' intentions. Whether entrepreneurship is perceived by older individuals as a voluntary or involuntary decision, or as a positive or negative outcome, is, to the best of our knowledge, still unexplored in the literature. In order to analyse the determinants of entrepreneurial satisfaction among older individuals, primary data were obtained from a unique questionnaire survey, which was sent to Portuguese senior entrepreneurs who had launched their company aged 50 and over (N=181). Portugal is one of the countries with the largest ageing population in the world and has a high proportion of older individuals who remain active after their official retirement age, which makes it an extremely relevant case study on senior entrepreneurship. Findings suggest that non-pecuniary factors (rather than pecuniary ones) are the main driver of entrepreneurship at older ages. Specifically, results show that the will to remain active is the main motivation of older individuals to become entrepreneurs. This is in line with the activity and continuity theories. Furthermore, senior entrepreneurs tend to have had an active working life (using their professional experience as a proxy) and thus want to keep the same lifestyle at an older age (in line with the theory of continuity). Finally, results show that even though older individuals' companies may not show the best financial performance, this does not seem to affect their satisfaction with the company and with entrepreneurship in general. The present study aims at exploring, discussing and bringing new research on senior entrepreneurship to the fore, rather than assuming a purely deductive approach; hence, further confirmatory analyses with larger data sets from different countries are required.

Keywords: active ageing, entrepreneurship, older entrepreneur, Portugal, satisfaction, senior entrepreneur

Procedia PDF Downloads 238
320 Tectogenesis Around Kalaat Es Senan, Northwest of Tunisia: Structural, Geophysical and Gravimetric Study

Authors: Amira Rjiba, Mohamed Ghanmi, Tahar Aifa, Achref Boulares

Abstract:

This study, involving the interpretation of geological outcrop data (structures and lithostratigraphic columns) and subsurface data (seismic and gravimetric), helps us (i) to identify and specify the lithology of the sedimentary formations between the Aptian and the recent formations, (ii) to differentiate the sedimentary formations from the salt-bearing Triassic, and (iii) to specify the major structures shaped by the tectonic events that affected the region during its geological evolution. Placed in the context of Tunisia, on the southern margin of the Tethys, the study area shows, through its tectonic traces and the structural analysis conducted, that it was subjected during the Triassic to active rifting, which triggered extensional tectonic events in the Cretaceous and the Paleogene. Lithostratigraphic correlations between outcrops and seismic data sets, tied to six oil wells drilled in the region, have allowed us to better understand the structural complexity and the role of the different tectonic faults that contributed to the current configuration, marked by the present-day rifts. Indeed, three fault directions, NW-SE, NNW-SSE to N-S, and NE-SW to E-W, played a major role in the genesis of the folds and of the NW-SE-trending collapse grabens. These results were complemented by seismic reflection data to clarify the geometry of the southern and western areas of the Kalaa Khasba graben. The eight seismic lines selected for this study allowed us to characterize the main structures, with isochron, contour and isovelocity maps of the Serdj horizon, which constitutes the main reservoir in the region. Line L2, tied to well 6, helped highlight the NW-SE compression that has resulted in persistent discrepancies widely identifiable in its lithostratigraphic column. The gravity survey confirmed the deep subsurface extension of most of the faults, whose activity appears to continue at depth. Gravimetry also reinforced the seismic interpretation, confirming at well 6 that the SW and NE flanks of the trough are two opposing faults that trace the boundaries of a NNW-SSE-trending graben filled with sediments of Mio-Pliocene and Quaternary age.

Keywords: graben, graben collapse, gravity, Kalat Es Senan, seismic, tectogenesis

Procedia PDF Downloads 369
319 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators

Authors: Guenther Schuh, Michael Riesener, Frederic Diels

Abstract:

Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. Some of the functional requirements remain unknown until late stages of product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. There are first approaches for combined, hybrid models comprising deterministic-normative methods like the Stage-Gate process and empirical-adaptive development methods like SCRUM on a project management level. However, the question of which development scopes can preferably be realized with empirical-adaptive rather than deterministic-normative approaches remains almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors like a company's technological ability, the manufacturability of prototypes and the potential solution space, as well as external factors like market accuracy, relevance and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First of all, each internal and external factor is rated in terms of its importance for the overall development task. Secondly, each requirement is evaluated, for every single internal and external factor, with respect to its suitability for empirical-adaptive development. Finally, the totals of the internal and external sides are combined into the Agile-Indicator. Thus, the Agile-Indicator constitutes a company-specific and application-related criterion on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a last step, this indicator is used for a specific clustering of development scopes by application of the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact captured by the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that are subject to market uncertainty into empirical-adaptive or deterministic-normative development scopes.
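
A minimal NumPy implementation of the fuzzy c-means step named above is sketched below; the two-column feature matrix (internal and external Agile-Indicator parts) is a synthetic placeholder.

```python
# Minimal fuzzy c-means (FCM) sketch for grouping development scopes by their
# Agile-Indicator profile. The feature matrix is a synthetic placeholder.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(X))            # initial membership degrees
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]    # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, u

# Rows = development scopes, columns = internal / external Agile-Indicator parts.
X = np.array([[0.90, 0.80], [0.85, 0.70], [0.20, 0.30], [0.15, 0.25], [0.50, 0.55]])
centers, memberships = fuzzy_c_means(X, c=2)
print("Cluster centers:\n", centers)
print("Membership degrees:\n", memberships.round(2))
```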

Keywords: agile, highly iterative development, agile-indicator, product development

Procedia PDF Downloads 247
318 Proficiency Testing of English for Specific Academic Purpose: Using a Pilot Test in a Taiwanese University as an Example

Authors: Wenli Tsou, Jessica Wu

Abstract:

Courses of English for specific academic purposes (ESAP) have become popular in higher education in Taiwan; however, no standardized tests have been developed for evaluating learners’ English proficiency in individual designated fields. Assuming that a learner’s proficiency in a specific academic area is built upon general proficiency in English together with specific knowledge and vocabulary in the content area, an adequate ESAP proficiency test may be constructed from selected test items related to the designated academic areas. In this study, through collaboration between a language testing institution and a university in Taiwan, three sets of ESAP tests, covering the three disciplinary areas of business and the workplace, science and engineering, and health and medicine, were developed and administered to sophomore students (N=1704) who were enrolled in ESAP courses at a university in southern Taiwan. For this study, the courses were grouped into the above-mentioned three disciplines, and students took the specialized proficiency test corresponding to the ESAP course they were taking. Because students were free to select which ESAP course to take, each course had both major and non-major students. Toward the end of the one-semester course, ending in January 2015, each student took two tests, one of general English (the General English Proficiency Test, or GEPT) and the other ESAP. Following each test, students filled out a survey reporting their test-taking experiences. A comparison of students’ two test scores showed that business majors and health and medicine majors performed better in ESAP than the non-majors in their classes, whereas science and engineering majors did about the same as their non-major counterparts. In addition, test takers at CEFR B2 (upper intermediate) level or above performed well in both tests, while students below B2 did slightly better in ESAP. The findings suggest that students’ test performance has been enhanced by their specialist content and vocabulary knowledge. Furthermore, the results of the survey show that the difficulty levels reported by students are consistent with their test performances. Based on the item analysis, the findings can be used to develop proficiency tests for specific disciplines and to identify ability indicators for college students in their designated fields.
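The item analysis mentioned above can be illustrated with a short Python sketch; the 0/1 response matrix is invented, and classical item difficulty (proportion correct) with point-biserial discrimination is only one common way of carrying out such an analysis, not necessarily the procedure used in this study.

import numpy as np

# Hypothetical 0/1 response matrix: rows = test takers, columns = test items
responses = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
])

total = responses.sum(axis=1)        # each examinee's total score
difficulty = responses.mean(axis=0)  # proportion correct per item

# Point-biserial discrimination: correlation of each item with the total score
def point_biserial(item, total):
    return np.corrcoef(item, total)[0, 1]

discrimination = np.array([point_biserial(responses[:, j], total)
                           for j in range(responses.shape[1])])

for j, (p, r) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"item {j}: difficulty={p:.2f}, discrimination={r:.2f}")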

Keywords: English for specific academic purposes (ESAP), General English Proficiency Test (GEPT), higher education, proficiency test

Procedia PDF Downloads 531
317 CFD Simulation for Urban Environment for Evaluation of the Wind Energy Potential of a Building or a New Urban Planning

Authors: David Serero, Loic Couton, Jean-Denis Parisse, Robert Leroy

Abstract:

This paper presents a method for analyzing airflow at the periphery of several typologies of architectural volumes. To understand the influence of the complexity of the urban environment on airflows in the city, we compared three sites at different architectural scales. The research sets out a method to identify the optimal locations for installing wind turbines on the edges of a building and to improve the energy extracted through the precise placement of an accelerating wing called an 'aerofoil'. The objective is to define principles for the installation of wind turbines and for the natural ventilation design of buildings. Instead of a purely theoretical wind analysis, we combined numerical airflow simulations using STAR-CCM+ software with wind data recorded over long periods of time (greater than one year). While computational fluid dynamics (CFD) simulations of airflow around buildings are now common practice, we calibrated a virtual wind tunnel against wind data from in situ anemometers to establish a localized cartography of urban winds. We can then develop a complete volumetric model of the behavior of the wind over a roof area or an entire urban block. With this method, we can characterize: the different types of wind in urban areas, including the minimum and maximum wind spectrum; the type of harvesting device to select and its fixing to the roof of a building; the height of the device relative to the roof levels; and the potential nuisances in the surroundings. This study is based on the recovery of a geolocated data stream and on connecting this information with the technical specifications of wind turbines, their energy performance, and their cut-in speed. Thanks to this method, we can define the characteristics of wind turbines that maximize their performance on urban sites and in a turbulent airflow regime. We also study the installation of a wind accelerator associated with buildings. The integrated aerofoils make it possible to control the speed of the air, to orient it onto the wind turbine, to accelerate it and, thanks to their profile, to conceal the device on the roof of the building.
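As a rough illustration of the last step, connecting a measured wind-speed series with a turbine's power curve and cut-in speed, the following Python sketch estimates an annual energy yield. The synthetic Weibull wind data, the 5 kW rated power, and the cubic power-curve ramp are assumptions for illustration only, not values or turbines from the study.

import numpy as np

# Hypothetical hourly wind speeds (m/s) at the rooftop site, standing in for anemometer data
rng = np.random.default_rng(1)
wind_speed = rng.weibull(2.0, size=8760) * 6.0   # one year of synthetic hourly data

def turbine_power_kw(v, rated_kw=5.0, cut_in=3.0, rated=12.0, cut_out=25.0):
    """Illustrative power curve of a small urban turbine: cubic ramp, then rated power."""
    v = np.asarray(v, dtype=float)
    p = np.zeros_like(v)
    ramp = (v >= cut_in) & (v < rated)
    p[ramp] = rated_kw * ((v[ramp] - cut_in) / (rated - cut_in)) ** 3
    p[(v >= rated) & (v <= cut_out)] = rated_kw
    return p

energy_kwh = turbine_power_kw(wind_speed).sum()        # hourly kW values summed -> kWh
capacity_factor = energy_kwh / (5.0 * len(wind_speed))
print(f"annual yield = {energy_kwh:.0f} kWh, capacity factor = {capacity_factor:.2f}")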

Keywords: wind energy harvesting, wind turbine selection, urban wind potential analysis, CFD simulation for architectural design

Procedia PDF Downloads 151
316 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver

Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin

Abstract:

National Space Organization (NSPO) completed in 2014 the development of a space-borne GPS receiver, including design, manufacture, comprehensive functional testing, environmental qualification testing, and so on. The main performance figures of this receiver include 8-meter positioning accuracy, 0.05 m/sec velocity accuracy, a cold start time of at most 90 seconds, and operation in high-dynamic scenarios of up to 15 g. The receiver will be integrated into the autonomous, NSPO-built FORMOSAT-7 satellite scheduled to be launched in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will undergo comprehensive functional tests and environmental acceptance tests, which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), of which currently only 50% of the throughput is used. In response to the rapid growth of global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a reflectometry receiver for a global navigation satellite system. The fundamental purpose of this extension study is to port software algorithms that reuse code and carry a large computational load, such as signal acquisition and correlation, to an FPGA, whose embedded processor is responsible for operational control, navigation solution, orbit propagation, and so on. Because FPGAs are developing and evolving rapidly, the new system architecture upgraded via an FPGA should be able to achieve the goal of being a multi-mode, multi-band, high-precision navigation receiver or a scientific receiver. Finally, the test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion. This paper explains the detailed DSP/FPGA architecture, development, test results, and the goals of the next development stage of this receiver.
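A minimal Python sketch of the FFT-based parallel code-phase search commonly used in software GPS acquisition, which is the kind of correlation workload the abstract proposes moving to the FPGA. The ±1 pseudorandom sequence stands in for a real C/A code replica, and the sample count and delay are illustrative; a real acquisition would also sweep Doppler frequency bins.

import numpy as np

rng = np.random.default_rng(0)
n = 4092                                 # samples per code period (illustrative)
prn = rng.choice([-1.0, 1.0], size=n)    # stand-in for a real C/A code replica

# Simulate a received snapshot: the code delayed by 1000 samples, plus noise
true_delay = 1000
received = np.roll(prn, true_delay) + 0.5 * rng.standard_normal(n)

# Parallel code-phase search: circular cross-correlation computed via FFT
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(prn)))
power = np.abs(corr) ** 2
estimated_delay = int(np.argmax(power))  # peak position gives the code phase

print("estimated code phase:", estimated_delay, "samples (true:", true_delay, ")")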

Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band

Procedia PDF Downloads 370
315 Classroom Curriculum That Includes Wisdom Skills

Authors: Brian Fleischli, Shani Robins

Abstract:

In recent years, the implementation of wisdom skills, including emotional intelligence, mindfulness, empathy, compassion, gratitude, realism (Cognitive-Behavioral Therapy), and humility, within K-12 educational settings has demonstrated significant benefits in reducing stress, anxiety, anger, and conflict among students. This study summarizes the findings of research conducted over several years, showcasing the positive outcomes associated with teaching these skills to elementary and high school students. Additionally, this overview includes an updated synthesis of the current literature concerning the application and effectiveness of training these skill sets in K-12 schools. The research outcomes highlight substantial improvements in student well-being and behavior, with treatment-group students exhibiting notable reductions in anger, anxiety, depression, and disruptive behaviors compared to control groups. For instance, fourth-grade students showed enhanced empathy, responsibility, and attention, particularly benefiting those with lower initial scores on these measures. Specific interaction effects suggest that older students and males particularly benefit from these interventions, showcasing the nuanced impact of wisdom skill training across different demographics. Furthermore, this presentation emphasizes the critical role of Social and Emotional Learning (SEL) programs in addressing the multifaceted challenges faced by children and adolescents, including mental health issues, academic performance, and social behaviors. The integration of wisdom skills into school curricula not only fosters individual growth and emotional regulation but also enhances the overall school climate and academic achievement. In conclusion, the findings contribute to the growing body of empirical evidence supporting the efficacy of teaching wisdom skills in educational settings. The success of these interventions underscores the potential for widespread implementation of evidence-based programs to promote emotional well-being and academic success among students nationwide.

Keywords: wisdom skills, CBT, cognitive behavioral training, mindfulness, empathy, anxiety

Procedia PDF Downloads 46
314 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. About one percent of the world population experiences epileptic seizure attacks. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and many spikes and sharp waves appear in the EEG signals. Detection of epileptic seizures by conventional methods is time-consuming, and many methods have been developed to detect them automatically. The initial part of this paper provides a review of techniques used to detect epileptic seizures automatically. Automatic detection is based on feature extraction and classification patterns, and for better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation, etc., to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review paper is to present the variations in the EEG signals at both stages, (i) interictal (recording between epileptic seizure attacks) and (ii) ictal (recording during the epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. This research paper then investigates the effects of a noninvasive healing therapy on subjects by studying the EEG signals using the latest signal processing techniques. The study has been conducted with Reiki as the healing technique, beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century. It is a system involving the laying on of hands to stimulate the body’s natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session, and post-intervention to bring about effective epileptic seizure control or its elimination altogether.
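A minimal Python sketch of one of the discriminative features named above, sample entropy, applied to two synthetic one-second windows. The signals, sampling rate, and parameters (m = 2, r = 0.2 times the standard deviation) are illustrative and do not reproduce the paper's data or processing chain.

import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal (Chebyshev distance, self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_matches(length):
        # All overlapping templates of the given length
        templates = np.lib.stride_tricks.sliding_window_view(x, length)
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance of template i to all later templates
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

# Hypothetical one-second EEG windows at 256 Hz: a smoother signal versus a spikier,
# noisier one (purely synthetic, for illustration only)
rng = np.random.default_rng(0)
t = np.arange(256) / 256.0
window_a = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(256)
window_b = np.sign(np.sin(2 * np.pi * 3 * t)) + 0.8 * rng.standard_normal(256)

print("SampEn window A:", round(sample_entropy(window_a), 3))
print("SampEn window B:", round(sample_entropy(window_b), 3))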

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 407
313 Gender Supportive Systems-Key to Good Governance in Agriculture: Challenges and Strategies

Authors: Padmaja Kaja, Kiran Kumar Gellaboina

Abstract:

A lion’s share of agricultural work in India, as in many developing countries, is contributed by women, yet women do not receive recognition as farmers. Many policies support women’s empowerment in India, especially in the agriculture sector, given the importance of sustainable food security. However, these policies have often failed to achieve the targeted results of mainstreaming gender. Implementing the principles of governance would lead to gender equality in agriculture. This paper deals with the social norms and obligations prevailing in the Indian context that keep women from holding resources. The paper draws on primary research conducted in eight districts of the Telangana and Andhra Pradesh states of India, supported by secondary research. Amending the Hindu Succession Act in united Andhra Pradesh well before the amended act was adopted across the whole country led to a better landholding share for women in Andhra Pradesh. Policies such as registering government-distributed land in the name of women in the state also added value. However, women’s participation in agricultural decision-making is limited in elite families compared with socially underprivileged families; furthermore, it was higher in drought-affected districts such as Mahbubnagar in Telangana than in the resource-rich East Godavari district in Andhra Pradesh. Though the National Gender Resource Centre for Agriculture (NGRCA) at the centre and Gender Cells in the states were established a decade ago, extension reach to women farmers still lags behind. Harnessing the strength of women’s self-help groups in India, especially in Andhra Pradesh, to link up with agricultural extension might improve the extension reach to women farmers. Maintaining micro-level data sets on women farmers and creating women farmers’ networks with government departments such as agriculture, irrigation, and revenue, and with formal credit institutions, would result in good governance for mainstreaming gender in agriculture. Furthermore, continuous monitoring and impact assessment of the programmes and projects for gender inclusiveness would reinforce government efforts.

Keywords: food security, gender, governance, mainstreaming

Procedia PDF Downloads 247
312 A Rapid Colorimetric Assay for Direct Detection of Unamplified Hepatitis C Virus RNA Using Gold Nanoparticles

Authors: M. Shemis, O. Maher, G. Casterou, F. Gauffre

Abstract:

Hepatitis C virus (HCV) is a major cause of chronic liver disease, with 170 million chronic carriers worldwide at risk of developing liver cirrhosis and/or liver cancer. Egypt reports the highest prevalence of HCV worldwide. Currently, two classes of assays are used in the diagnosis and management of HCV infection. Despite the high sensitivity and specificity of the available diagnostic assays, they are time-consuming, labor-intensive, expensive, and require specialized equipment and highly qualified personnel. It is therefore important, in both clinical and economic terms, to develop a low-tech assay for the direct detection of HCV RNA with acceptable sensitivity and specificity, a short turnaround time, and cost-effectiveness. Such an assay would be critical for controlling HCV in developing countries with limited resources and high infection rates, such as Egypt. The unique optical and physical properties of gold nanoparticles (AuNPs) have allowed these nanoparticles to be used in developing simple and rapid colorimetric assays for clinical diagnosis, offering higher sensitivity and specificity than current detection techniques. The current research aims to develop a detection assay for HCV RNA using gold nanoparticles (AuNPs). Methods: 200 anti-HCV positive and 50 anti-HCV negative plasma samples were collected from Egyptian patients. HCV viral load was quantified using the m2000rt system (Abbott Molecular Inc., Des Plaines, IL). HCV genotypes were determined using multiplex nested RT-PCR. The assay is based on the aggregation of AuNPs in the presence of the target RNA; aggregation of AuNPs causes a color shift from red to blue. AuNPs were synthesized using the citrate reduction method. Different sets of probes within the conserved 5’ UTR region of the HCV genome were designed, grafted onto AuNPs, and optimized for the efficient detection of HCV RNA. Results: The nano-gold assay could colorimetrically detect HCV RNA down to 125 IU/ml with a sensitivity and specificity of 91.1% and 93.8%, respectively. The turnaround time of the assay is < 30 min. Conclusions: The assay allows sensitive and rapid detection of HCV RNA and represents an inexpensive and simple point-of-care assay for resource-limited settings.
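For reference, sensitivity and specificity figures such as those quoted above are computed from a confusion matrix against a reference method, as in the short Python sketch below; the counts are invented for illustration and are not the study's results.

# Hypothetical counts for a colorimetric assay evaluated against a reference PCR result
true_positive = 90    # assay positive, reference positive
false_negative = 10   # assay negative, reference positive
true_negative = 46    # assay negative, reference negative
false_positive = 4    # assay positive, reference negative

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)

print(f"sensitivity = {sensitivity:.1%}")   # proportion of infected samples detected
print(f"specificity = {specificity:.1%}")   # proportion of uninfected samples correctly negative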

Keywords: HCV, gold nanoparticles, point of care, viral load

Procedia PDF Downloads 206
311 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept

Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua

Abstract:

Rivers are our main source of water; river flow is a form of open channel flow, and flow in an open channel involves many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends to a large extent on the flow of rivers, which are major sources of sediments and of specific constituents essential for human beings. A river flow consisting of small and shallow channels sometimes divides and recombines numerous times because of slow water flow or built-up sediments. The pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediments are deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained and heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae are suitably modified, and the Energy Concept Method (ECM) is applied for the prediction of discharges at the junction of a two-flow braided compound channel; the ECM has not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed on the basis of a mechanical analysis. For estimating the total discharge, the channel cross-section is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level. The experimental data are compared with a wide range of theoretical data available in the published literature to verify this model. The accuracy of this approach is also compared with that of the Divided Channel Method (DCM), a simple sketch of which is given below. From the error analysis of this method, it is observed that the relative error is smaller for data sets with smooth floodplains than for those with rough floodplains. Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.
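A minimal Python sketch of the Divided Channel Method used for comparison, assuming a simple rectangular compound cross-section, a vertical division at the bank-full level, and Manning's equation applied to each subsection. All geometry, roughness, and slope values are illustrative and are not taken from the experiments.

import numpy as np

def manning_discharge(area, wetted_perimeter, n, slope):
    """Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2), with R = A / P."""
    R = area / wetted_perimeter
    return (1.0 / n) * area * R ** (2.0 / 3.0) * np.sqrt(slope)

# Illustrative rectangular compound section (SI units)
b_mc, h_bank = 0.5, 0.1      # main-channel width and bank-full depth (m)
b_fp = 1.0                   # total floodplain width (m)
H = 0.15                     # total flow depth (m), above bank-full
n_mc, n_fp = 0.010, 0.014    # Manning roughness, main channel / floodplain
S = 0.001                    # bed slope

# Vertical division at the bank-full level (division interfaces excluded from wetted perimeter)
A_mc = b_mc * H
P_mc = b_mc + 2 * h_bank
A_fp = b_fp * (H - h_bank)
P_fp = b_fp + 2 * (H - h_bank)

Q_total = (manning_discharge(A_mc, P_mc, n_mc, S)
           + manning_discharge(A_fp, P_fp, n_fp, S))
print(f"DCM total discharge = {Q_total * 1000:.2f} L/s")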

Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel

Procedia PDF Downloads 127