Search results for: generalized random graphs
516 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran
Authors: Fatemeh Faramarzi, Hosein Mahjoob
Abstract:
Building dams on rivers for the utilization of water resources disturbs the hydrodynamic equilibrium and results in all or part of the sediment carried by the water being deposited in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and, in the long term, can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir and threatening sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the amount of sedimentation in dam reservoirs and to estimate their useful lifetime, but recently mathematical and computational models have been widely used as a suitable tool in reservoir sedimentation studies. These models usually solve the governing equations using the finite element method. This study compares the results from two software packages, GSTARS4 and HEC-6, in predicting the amount of sedimentation in the Dez Dam, southern Iran. The model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport Model for Alluvial River Simulation), a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme, was used to calibrate a period of 47 years and forecast the next 47 years of sedimentation in the Dez Dam. This dam, at 203 m, is among the highest dams in the world; it irrigates more than 125,000 hectares of downstream land and plays a major role in flood control in the region. The input data, including geometric, hydraulic, and sediment data, cover the period from 1955 to 2003 on a daily basis. To predict future river discharge, the time series data were assumed to repeat over the following 47 years. The result obtained was very satisfactory in the delta region, where the output from GSTARS4 was almost identical to the 2003 hydrographic profile. Because the Dez reservoir is long (65 km) and large, vertical currents are dominant, which makes the calculations by the above-mentioned method inaccurate. To solve this problem, the empirical reduction method was used to calculate the sedimentation in the downstream area, which gave very good results. Thus, combining these two methods yields a very suitable sedimentation model for the Dez Dam over the study period, and the present study demonstrated that the outputs of both methods agree.
Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6
Procedia PDF Downloads 313
515 Correlates of Modes of Transportation to Work among Working Adults in Ernakulam District, Kerala
Authors: Anjaly Joseph, Elezebeth Mathews
Abstract:
Transportation and urban planning is the least recognised area for physical activity promotion in India, unlike in developed regions. Identifying the preferred transportation modalities and the factors associated with them is essential to address this lacuna. The objective of the study was to assess the prevalence of modes of transportation to work and its correlates among working adults in Ernakulam District, Kerala. A cross-sectional study was conducted among 350 working individuals in the age group of 18-60 years, selected through multi-stage stratified random sampling in the Ernakulam district of Kerala. The inclusion criteria were working individuals aged 18-60 years whose workplace was more than 1 km from home and who worked five or more days a week. Pregnant women/women on maternity leave and drivers (taxi drivers, autorickshaw drivers, and lorry drivers) were excluded. An interview schedule was used to capture the modes of transportation, namely public, private and active transportation, socio-demographic details, travel behaviour, anthropometric measurements and health status. Nearly two-thirds (64 percent) used private transportation to work, while only 6.6 percent were active commuters. The correlates identified for active commuting compared to other modes were low socio-economic status (OR=0.22, CI=0.5-0.85) and presence of a driving license (OR=4.95, CI=1.59-15.45). The correlates identified for public transportation compared to private transportation were female gender (OR=17.79, CI=6.26-50.31), low income (OR=0.33, CI=0.11-0.93), being unmarried (OR=5.19, CI=1.46-8.37), presence of no or only one private vehicle in the house (OR=4.23, CI=1.24-20.54) and presence of a convenient public transportation facility to the workplace (OR=3.97, CI=1.66-9.47). The association between body mass index (BMI) and public transportation was also explored; public transport users had a lower BMI than private commuters (OR=2.30, CI=1.23-4.29). Policies that encourage active and public transportation need to be introduced, such as discouraging private vehicles through taxes and introducing convenient and safe public transportation, walking/cycling paths, and paid parking.
Keywords: active transportation, correlates, India, public transportation, transportation modes
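The odds ratios and confidence intervals quoted above are the standard output of a logistic regression on survey data. The following is a minimal, hypothetical sketch (not the study's code or data) of how such estimates can be derived with statsmodels; the simulated predictors and their effects are assumptions for illustration only.

```python
# Hedged sketch: odds ratios and 95% CIs from a logistic regression on
# simulated survey data. Variable names and effects are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
female = rng.integers(0, 2, n)
low_income = rng.integers(0, 2, n)
unmarried = rng.integers(0, 2, n)
# Outcome (1 = public transport) loosely related to the covariates.
lin = -0.5 + 1.2 * female + 0.8 * low_income + 0.5 * unmarried
public = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = pd.DataFrame({"female": female, "low_income": low_income, "unmarried": unmarried})
X = sm.add_constant(X)
model = sm.Logit(public, X).fit(disp=False)

odds_ratios = np.exp(model.params)      # OR = exp(coefficient)
conf_int = np.exp(model.conf_int())     # 95% CI on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```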
Procedia PDF Downloads 164
514 Role of Financial Institutions in Promoting Micro Service Enterprises with Special Reference to Hairdressing Salons
Authors: Gururaj Bhajantri
Abstract:
The financial sector is the backbone of any economy, and it plays a crucial role in the mobilisation and allocation of resources. One of the main objectives of the financial sector is inclusive growth. The constituents of the financial sector are banks and financial institutions, which mobilise resources from the surplus sector and channel them to the sectors of the economy that need them. The Micro, Small and Medium Enterprises sector in India covers a wide range of economic activities. These enterprises are classified on the basis of investment in equipment. Micro enterprises are divided into the manufacturing and services sectors. Micro service enterprises have an investment limit of up to ten lakhs on equipment. A hairdresser not only cuts and shaves but also provides different types of haircuts and hairstyles, trimming, hair dye, massage, manicure, pedicure, nail services, colouring, facials, makeup application, waxing, tanning and other beauty treatments; hairdressing salons provide these services with the help of equipment. Their investment in equipment does not exceed ten lakhs; hence, they can be considered micro service enterprises. Starting a moderate hairdressing salon requires more than Rs 2,50,000. Moreover, hairdressers are unable to access organised finance and still borrow from money lenders at high rates of interest to make a living. The socio-economic conditions of hairdressers are not properly known. Hence, the present study sheds light on the role of financial institutions in promoting hairdressing salons. The study also focuses on the socio-economic background of individuals in hairdressing salons and the problems they face. The present study is based on primary and secondary data. Primary data were collected from hairdressing salons in Davangere city, with samples selected through simple random sampling. The collected data were analysed and interpreted with the help of simple statistical tools.
Keywords: micro service enterprises, financial institutions, hairdressing salons, financial sector
Procedia PDF Downloads 205
513 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach
Authors: James Ladzekpo
Abstract:
Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17% and outstanding metrics such as an area under the receiver operating characteristic curve (AUC) with a median of 68%, alongside median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
Keywords: diabetes, machine learning, prediction, biomarkers
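As a concrete illustration of the methodology described above (recursive feature elimination with cross-validation on a linear-kernel SVM, followed by a Gradient Boosting classifier), the sketch below uses scikit-learn on synthetic data. It is not the authors' code, and the generated matrix merely stands in for the epigenetic feature set.

```python
# Minimal sketch: RFECV with a linear SVM for feature selection, then a
# Gradient Boosting classifier evaluated on the selected features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the epigenetic feature matrix.
X, y = make_classification(n_samples=400, n_features=60, n_informative=12,
                           random_state=0)

# RFECV needs a linear SVM so that feature weights (coef_) are available.
selector = RFECV(SVC(kernel="linear"), step=1, cv=5, scoring="recall")
X_sel = selector.fit_transform(X, y)
print("features kept:", selector.n_features_)

gbc = GradientBoostingClassifier(random_state=0)
print("CV recall:", cross_val_score(gbc, X_sel, y, cv=5, scoring="recall").mean())
```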
Procedia PDF Downloads 55
512 Integration of Educational Data Mining Models to a Web-Based Support System for Predicting High School Student Performance
Authors: Sokkhey Phauk, Takeo Okazaki
Abstract:
The challenging task in educational institutions is to maximize the high performance of students and minimize the failure rate of poor-performing students. An effective way to approach this task is to understand student learning patterns and their highly influential factors, and to obtain an early prediction of student learning outcomes at a timely stage for setting up improvement policies. Educational data mining (EDM) is an emerging disciplinary field of data mining, statistics, and machine learning concerned with extracting useful knowledge and information for the sake of improvement and development in the education environment. The aim of this work is to propose techniques in EDM and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models is conducted. Subsequently, high-performing models are developed further to obtain higher performance. The hybrid random forest (Hybrid RF) produces the most successful classification. In the context of intervention and improving learning outcomes, a feature selection method, MICHI, which combines the mutual information (MI) and chi-square (CHI) algorithms based on ranked feature scores, is introduced to select a dominant feature set that improves prediction performance; the obtained dominant set is then used as information for intervention. Using the proposed EDM techniques, an academic performance prediction system (APPS) is subsequently developed for educational stakeholders to obtain an early prediction of student learning outcomes for timely intervention. Experimental outcomes and evaluation surveys report the effectiveness and usefulness of the developed system. The system is used to help educational stakeholders and related individuals intervene and improve student performance.
Keywords: academic performance prediction system, educational data mining, dominant factors, feature selection method, prediction model, student performance
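The MICHI step described above combines mutual information and chi-square scores over ranked features. Since the abstract does not give the exact combination rule, the rank-sum used in the following sketch is an illustrative assumption, applied to synthetic data rather than the authors' student dataset.

```python
# Hedged sketch of a MICHI-style selector: combine mutual information and
# chi-square scores by their ranks and keep the top-k features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, chi2
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=500, n_features=30, random_state=1)
X_pos = MinMaxScaler().fit_transform(X)      # chi2 requires non-negative inputs

mi_scores = mutual_info_classif(X_pos, y, random_state=1)
chi_scores, _ = chi2(X_pos, y)

# Rank features under each criterion (rank 0 = strongest) and sum the ranks.
mi_rank = np.argsort(np.argsort(-mi_scores))
chi_rank = np.argsort(np.argsort(-chi_scores))
combined = mi_rank + chi_rank

k = 10
dominant = np.argsort(combined)[:k]          # smallest combined rank = dominant
print("dominant feature indices:", sorted(dominant.tolist()))
```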
Procedia PDF Downloads 106
511 Molecular Characterization of Ovine Herpesvirus 2 Strains Based on Selected Glycoprotein and Tegument Genes
Authors: Fulufhelo Amanda Doboro, Kgomotso Sebeko, Stephen Njiro, Moritz Van Vuuren
Abstract:
The Ovine herpesvirus 2 (OvHV-2) genome obtained from the lymphoblastoid cell line of a BJ1035 cow was recently sequenced in the United States of America (USA). Information on the sequences of OvHV-2 genes obtained from South African bovine strains or strains from other African countries, and on the molecular characterization of OvHV-2, is not documented. The present investigation provides information on the nucleotide and derived amino acid sequences and the genetic diversity of the Ov 7, Ov 8 ex2, ORF 27 and ORF 73 genes from OvHV-2 strains circulating in South Africa. Gene-specific primers were designed and used for PCR of DNA extracted from 42 bovine blood samples that had previously tested positive for OvHV-2. The expected PCR products of 495 bp, 253 bp, 890 bp and 1632 bp, respectively, for the Ov 7, Ov 8 ex2, ORF 27 and ORF 73 genes were sequenced, and multiple sequence analysis was done on selected regions of the sequenced PCR products. Two genotypes for the ORF 27 and ORF 73 gene sequences and three genotypes for the Ov 7 and Ov 8 ex2 gene sequences were identified, and similar groupings of the derived amino acid sequences were obtained for each gene. The nucleotide and amino acid sequence variations that led to the identification of the different genotypes included SNPs, deletions and insertions. Sequence analysis of the Ov 7 and ORF 27 genes revealed variations that distinguished between sequences from SA and reference OvHV-2 strains. The implication of geographic origin among the SA sequences was difficult to evaluate because, for each gene, the genotypes were randomly distributed across the different provinces. However, socio-economic factors such as the migration of people with animals, or the transportation of animals for agricultural or business use from one province to another, are most likely responsible for this observation. The sequence variations observed in this study have no impact on the antibody-binding activities of the glycoproteins encoded by the Ov 7, Ov 8 ex2 and ORF 27 genes, as determined by predicting the presence of B cell epitopes using BepiPred 1.0. The findings of this study will be used to select candidate genes for the development of diagnostic assays and vaccines.
Keywords: amino acid, genetic diversity, genes, nucleotide
Procedia PDF Downloads 491
510 Phylogenetic Analysis Based On the Internal Transcribed Spacer-2 (ITS2) Sequences of Diadegma semiclausum (Hymenoptera: Ichneumonidae) Populations Reveals Significant Adaptive Evolution
Authors: Ebraheem Al-Jouri, Youssef Abu-Ahmad, Ramasamy Srinivasan
Abstract:
The parasitoid Diadegma semiclausum (Hymenoptera: Ichneumonidae) is one of the most effective exotic parasitoids of the diamondback moth (DBM), Plutella xylostella, in the lowland areas of Homs, Syria. Molecular evolution studies are useful tools to shed light on the molecular bases of insect geographical spread and adaptation to new hosts and environments, and for designing better control strategies. In this study, molecular evolution analysis was performed based on 42 nuclear internal transcribed spacer-2 (ITS2) sequences representing D. semiclausum and eight other Diadegma spp. from Syria and worldwide. Possible recombination events were identified by the RDP4 program. Four potential recombinants of the American D. insulare and D. fenestrale (Jeju) were detected. After detecting and removing recombinant sequences, the ratio of non-synonymous (dN) to synonymous (dS) substitutions per site (dN/dS=ɷ) was used to identify codon positions involved in adaptive processes. Bayesian techniques were applied to detect selective pressures at the codon level using five different approaches: fixed effects likelihood (FEL), internal fixed effects likelihood (IFEL), random effects likelihood (REL), the mixed effects model of evolution (MEME) and Phylogenetic Analysis by Maximum Likelihood (PAML). Among the 40 positively selected amino acids (aa) that differed significantly between clades of Diadegma species, three aa under positive selection were identified only in D. semiclausum. Additionally, all D. semiclausum tree branches were found to be under episodic diversifying selection (EDS) at p≤0.05. Our study provides evidence that both recombination and positive selection have contributed to the molecular diversity of Diadegma spp. and highlights the significant contribution of adaptive evolution in D. semiclausum and its influence on the fitness of the DBM parasitoid.
Keywords: Diadegma sp., DBM, ITS2, phylogeny, recombination, dN/dS, evolution, positive selection
Procedia PDF Downloads 416
509 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
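The core ensembling idea, averaging the outputs of the top-performing models, can be sketched as follows. This is a hypothetical illustration with synthetic data, not the prototype built for Los Angeles.

```python
# Minimal sketch: train several classifiers and average their predicted
# probabilities to form a combined prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the weather/pollutant feature set.
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(random_state=0),
          MLPClassifier(max_iter=1000, random_state=0)]
for m in models:
    m.fit(X_tr, y_tr)

# Average the class-1 probabilities of the individual models.
avg_proba = np.mean([m.predict_proba(X_te)[:, 1] for m in models], axis=0)
combined_pred = (avg_proba >= 0.5).astype(int)
print("combined accuracy:", accuracy_score(y_te, combined_pred))
```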
Procedia PDF Downloads 127
508 Effects of Self-Management Programs on Blood Pressure Control, Self-Efficacy, Medication Adherence, and Body Mass Index among Older Adult Patients with Hypertension: Meta-Analysis of Randomized Controlled Trials
Authors: Van Truong Pham
Abstract:
Background: Self-management has been described as a potential strategy for blood pressure control in patients with hypertension. However, the effects of self-management interventions on blood pressure, self-efficacy, medication adherence, and body mass index (BMI) in older adults with hypertension have not been systematically evaluated. We evaluated the effects of self-management interventions on systolic blood pressure (SBP) and diastolic blood pressure (DBP), self-efficacy, medication adherence, and BMI in hypertensive older adults. Methods: We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Searches in electronic databases including CINAHL, Cochrane Library, Embase, Ovid-Medline, PubMed, Scopus, Web of Science, and other sources were performed to include all relevant studies up to April 2019. Study selection, data extraction, and quality assessment were performed by two reviewers independently. We summarized intervention effects as Hedges' g values and 95% confidence intervals (CI) using a random-effects model. Data were analyzed using Comprehensive Meta-Analysis software 2.0. Results: Twelve randomized controlled trials met our inclusion criteria. The results revealed that self-management interventions significantly improved blood pressure control, self-efficacy, and medication adherence, whereas the effect of self-management on BMI was not significant in older adult patients with hypertension. The following Hedges' g (effect size) values were obtained: SBP, -0.34 (95% CI, -0.51 to -0.17, p < 0.001); DBP, -0.18 (95% CI, -0.30 to -0.05, p < 0.001); self-efficacy, 0.93 (95% CI, 0.50 to 1.36, p < 0.001); medication adherence, 1.72 (95% CI, 0.44 to 3.00, p = 0.008); and BMI, -0.57 (95% CI, -1.62 to 0.48, p = 0.286). Conclusions: Self-management interventions significantly improved blood pressure control, self-efficacy, and medication adherence. However, the effects of self-management on obesity control were not supported by the evidence. Healthcare providers should implement self-management interventions to strengthen patients' role in managing their health care.
Keywords: self-management, meta-analysis, blood pressure control, self-efficacy, medication adherence, body mass index
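The pooling of Hedges' g values under a random-effects model, as performed here with Comprehensive Meta-Analysis 2.0, can be illustrated with the standard DerSimonian-Laird estimator. The effect sizes and variances in the sketch below are hypothetical, not the values from the twelve included trials.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of Hedges' g.
import numpy as np

g = np.array([-0.40, -0.25, -0.55, -0.10, -0.35])    # Hedges' g per trial (hypothetical)
v = np.array([0.04, 0.06, 0.05, 0.03, 0.07])         # within-study variances (hypothetical)

w = 1.0 / v                                           # fixed-effect weights
g_fe = np.sum(w * g) / np.sum(w)
q = np.sum(w * (g - g_fe) ** 2)                       # Cochran's Q
df = len(g) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                         # between-study variance

w_star = 1.0 / (v + tau2)                             # random-effects weights
pooled = np.sum(w_star * g) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled g = {pooled:.2f}, 95% CI = [{pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f}]")
```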
Procedia PDF Downloads 128
507 Linguistic Insights Improve Semantic Technology in Medical Research and Patient Self-Management Contexts
Authors: William Michael Short
Abstract:
‘Semantic Web’ technologies such as the Unified Medical Language System Metathesaurus, SNOMED-CT, and MeSH have been touted as transformational for the way users access online medical and health information, enabling both the automated analysis of natural-language data and the integration of heterogeneous health-related resources distributed across the Internet through the use of standardized terminologies that capture concepts and relationships between concepts that are expressed differently across datasets. However, the approaches that have so far characterized ‘semantic bioinformatics’ have not yet fulfilled the promise of the Semantic Web for medical and health information retrieval applications. This paper argues, from the perspective of cognitive linguistics and cognitive anthropology, that four features of human meaning-making must be taken into account before the potential of semantic technologies can be realized for this domain. First, many semantic technologies operate exclusively at the level of the word. However, texts convey meanings in ways beyond lexical semantics. For example, transitivity patterns (distributions of active or passive voice) and modality patterns (configurations of modal constituents like may, might, could, would, should) convey experiential and epistemic meanings that are not captured by single words. Language users also naturally associate stretches of text with discrete meanings, so that whole sentences can be ascribed senses similar to the senses of words (so-called ‘discourse topics’). Second, natural language processing systems tend to operate according to the principle of ‘one token, one tag’. For instance, occurrences of the word sound must be disambiguated for part of speech: in context, is sound a noun or a verb or an adjective? In syntactic analysis, deterministic annotation methods may be acceptable. But because natural language utterances are typically characterized by polyvalency and ambiguities of all kinds (including intentional ambiguities), such methods leave the meanings of texts highly impoverished. Third, ontologies tend to be disconnected from everyday language use and so struggle in cases where single concepts are captured through complex lexicalizations that involve profile shifts or other embodied representations. More problematically, concept graphs tend to capture ‘expert’ technical models rather than ‘folk’ models of knowledge and so may not match users’ common-sense intuitions about the organization of concepts in prototypical structures rather than Aristotelian categories. Fourth, and finally, most ontologies do not recognize the pervasively figurative character of human language. However, since the time of Galen the widespread use of metaphor in the linguistic usage of both medical professionals and lay persons has been recognized. In particular, metaphor is a well-documented linguistic tool for communicating experiences of pain. Because semantic medical knowledge-bases are designed to help capture variations within technical vocabularies – rather than the kinds of conventionalized figurative semantics that practitioners as well as patients actually utilize in clinical description and diagnosis – they fail to capture this dimension of linguistic usage.
The failure of semantic technologies in these respects degrades the efficiency and efficacy not only of medical research, where information retrieval inefficiencies can lead to direct financial costs to organizations, but also of care provision, especially in contexts of patients’ self-management of complex medical conditions.
Keywords: ambiguity, bioinformatics, language, meaning, metaphor, ontology, semantic web, semantics
Procedia PDF Downloads 132
506 Entrepreneurship Education: A Panacea for Entrepreneurial Intention of University Undergraduates in Ogun State, Nigeria
Authors: Adedayo Racheal Agbonna
Abstract:
The rising level of graduate unemployment in Nigeria has brought about the introduction of entrepreneurship education as a career option for self-reliance and self-employment. Consequently, it is important to understand the determining factors of entrepreneurial intention. This research therefore empirically investigated the influence of entrepreneurship education on the entrepreneurial intention of undergraduate students of selected universities in Ogun State, Nigeria. The study is significant to researchers, university policy makers, and the government. A survey research design was adopted. The population consisted of 17,659 final-year undergraduate students of universities in Ogun State. The study adopted stratified and random sampling techniques. A sample-size determination table was used at a 95% confidence level and 5% margin of error, giving a sample size of 1,877 respondents. The elements of the population were 400-level students of the selected universities. A structured questionnaire titled 'Entrepreneurship Education and Students' Entrepreneurial Intention' was administered. The reliability test yielded values of 0.716, 0.907 and 0.949 for infrastructure, perceived university support, and entrepreneurial intention, respectively. In the same vein, the construct validity test yielded values of 0.711, 0.663 and 0.759 for infrastructure, perceived university support and entrepreneurial intention, respectively. The findings revealed that each of the entrepreneurship education variables significantly affected intention: university infrastructure (B = -1.200, R² = 0.679, F(1, 1875) = 3958.345, p < 0.05) and perceived university support (B = -1.027, R² = 0.502, F(1, 1875) = 1924.612, p < 0.05). The perceptions of respondents in public and private universities of entrepreneurship education showed a statistically significant difference [F(1, 1875) = 134.614, p < 0.05; F(1, 1875) = 363.439]. The study concluded that entrepreneurship education positively influenced the entrepreneurial intention of undergraduate students in Ogun State, Nigeria. Also, university infrastructure and perceived university support had a negative and significant effect on entrepreneurial intention. The study recommended that, to promote the entrepreneurial intention of university undergraduate students, infrastructure and university support that can arouse students' entrepreneurial intention should be put in place.
Keywords: entrepreneurship education, entrepreneurial intention, perceived university support, university infrastructure
Procedia PDF Downloads 235
505 The Influence of the Institutional Environment in Increasing Wealth: The Case of Women Business Operators in a Rural Setting
Authors: S. Archsana, Vajira Balasuriya
Abstract:
In Trincomalee, Sri Lanka, a post-conflict area, resettlement projects and policy initiatives are taking place to improve the wealth of rural communities by promoting economic activities, encouraging rural women to commence and operate Micro and Small Scale (MSS) businesses. This study attempts to identify the manner in which the institutional environment could facilitate MSS businesses owned and operated by women in the rural environment. The respondents of this study are beneficiaries of the Divi Neguma Development Training Program (DNDTP), a project designed to aid women-owned MSS businesses, in Trincomalee district. 96 women business operators who had obtained financing facilities from the DNDTP were taken as the sample, based on a fixed-interval random sampling method. The study reveals that the primary challenge encountered by the women business operators is lack of initial capital (82%), followed by initial market finding (71%) and access to technology (35%). Low levels of education and language barriers are the constraints in accessing support agencies/service providers. Institutional support, specifically management and marketing services, has a significant relationship with wealth augmentation. Institutional support at the setting-up stage of businesses is thin, whereas the terms and conditions of the finance facilities are perceived as 'too challenging'. Although diversification enhances the wealth of the rural women business operators, assistance from the institutional framework in preparing the financial reports required for business expansion is scant. The study further reveals that institutional support is very weak in terms of providing access to new technology and identifying new market networks. A mechanism that enables the institutional framework to support rural women business operators in accessing new technology and untapped market segments, together with assistance in the preparation of legal and financial documentation, is recommended.
Keywords: business facilitation, institutional support, rural women business operators, wealth augmentation
Procedia PDF Downloads 437
504 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials
Authors: Sheikh Omar Sillah
Abstract:
Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors, but they are not optimal for identifying fabricated and implanted data or non-random data distributions that may significantly invalidate study results. The objective of this paper was to provide recommendations, based on best statistical monitoring practices, for detecting data-integrity issues suggestive of fabrication and implantation early in the study conduct, allowing the implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into Eppi-Reviewer software, and only publications written in English from 2012 onwards were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters such as laboratory reference range values, visit schedules, etc., with appropriate interactive data monitoring, statistical monitoring can offer study teams early signals of data anomalies. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that might reveal issues affecting data integrity or potentially impacting study participants' safety. However, subjective measures may not be good candidates for statistical monitoring. Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to implement its principles in detecting data anomalies for the statistical aspects of a clinical trial.
Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring
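To make the idea of specifying project-specific parameters such as laboratory reference range values concrete, the following sketch shows one simple central check: flagging out-of-range laboratory values and sites with implausibly low variability. It is an illustrative assumption, not a method taken from the reviewed publications.

```python
# Illustrative central statistical check on hypothetical trial data:
# range violations and near-constant site values (a possible implantation signal).
import pandas as pd

data = pd.DataFrame({
    "site": ["A"] * 5 + ["B"] * 5,
    "hgb":  [13.2, 12.8, 14.1, 13.5, 12.9, 13.0, 13.0, 13.0, 13.0, 13.0],
})
ref_low, ref_high = 11.0, 17.0                       # protocol reference range

out_of_range = data[(data["hgb"] < ref_low) | (data["hgb"] > ref_high)]
variability = data.groupby("site")["hgb"].std()

print("values outside the reference range:\n", out_of_range)
print("sites with near-zero variability:\n", variability[variability < 0.1])
```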
Procedia PDF Downloads 77
503 A Study of the Use of Arguments in Nominalizations as Instantiations of Grammatical Metaphors Finished in -TION in Academic Texts of Native Speakers
Authors: Giovana Perini-Loureiro
Abstract:
The purpose of this research was to identify whether nominalizations terminating in -TION in the academic discourse of native English speakers contain the arguments required by their input verbs. From the perspective of functional linguistics, ideational metaphors, with nominalization as their most pervasive realization, are lexically dense and therefore frequent in formal texts. Ideational metaphors allow the academic genre to instantiate objectification, de-personalization, and the ability to construct a chain of arguments. The valence of the nouns in nominalizations tends to maintain the same elements as the valence of their original verbs, but these arguments are not always expressed. The initial hypothesis was that these arguments would also be present alongside the nominalizations, through anaphora or cataphora. In this study, a qualitative analysis of the occurrences of the five most frequent nominalized -TION terminations in academic texts was carried out, together with a verification of the occurrences of the arguments required by the original verbs. The concordance lines were assembled through COCA (the Corpus of Contemporary American English). After identifying the five most frequent nominalizations (attention, action, participation, instruction, intervention), concordance lines were selected at random for analysis, ensuring the representativeness and reliability of the sample. It was possible to verify, in all the analyzed instances, the presence of arguments. In most instances, the arguments were not expressed but were recoverable, either from the context or from the knowledge shared among the interactants. It was concluded that the realizations of the arguments that are not expressed alongside the nominalizations are part of a continuum, ranging from the immediate context, with anaphora and cataphora, to knowledge shared outside the text, such as specific area knowledge. The study also has implications for the teaching of academic writing, especially with regard to the impact of nominalizations on the thematic and informational flow of the text. Grammatical metaphors are essential to academic writing; hence, acknowledging the occurrence of their arguments is paramount to achieving the linguistic awareness and writing prestige required by the academy.
Keywords: corpus, functional linguistics, grammatical metaphors, nominalizations, academic English
Procedia PDF Downloads 146
502 The Impact of Adopting Cross Breed Dairy Cows on Households’ Income and Food Security in the Case of Dejen Woreda, Amhara Region, Ethiopia
Authors: Misganaw Chere Siferih
Abstract:
This study assessed the impact of crossbreed dairy cows on household income and food security. The study area is in Dejen Woreda, East Gojam Zone, Amhara region of Ethiopia. A random sampling technique was used to obtain a sample of 80 crossbreed dairy cow owners and 176 indigenous dairy cow owners. The study employed the food consumption score analytical framework to measure the food security status of households. No statistically significant mean difference was found between crossbreed owners and indigenous owners. Logistic regression was employed to investigate the determinants of crossbreed dairy cow adoption; the results indicate that gender, education, number of laborers, cultivated land size, dairy cooperative membership, net income and food security status of the household are statistically significant independent variables explaining the binary dependent variable, crossbreed dairy cow adoption. Propensity score matching (PSM) was employed to analyze the impact of crossbreed dairy cow ownership on farmers' income and food security. The average net income of crossbreed dairy cow owners was found to be significantly higher than that of indigenous dairy cow owners. Estimates of the average treatment effect on the treated (ATT) indicated that crossbreed dairy cow ownership raises households' net income by 42%, 38.5%, 30.8% and 44.5% under the kernel, radius, nearest-neighbor and stratification matching algorithms, respectively, compared with indigenous dairy cow owners. However, the ATT estimates suggest that owning a crossbreed dairy cow does not significantly affect food security. Thus, crossbreed dairy cows enable farmers to increase income but not food security in the study area. Finally, the study recommended establishing dairy cooperatives and advising farmers to join them, paying attention to promoting the impact of crossbreed dairy cows, and promoting nutrition-focused projects.
Keywords: crossbreed dairy cow, net income, food security, propensity score matching
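The ATT estimation described above can be illustrated with a minimal nearest-neighbor propensity score matching sketch on hypothetical data; the kernel, radius and stratification variants follow the same logic with different weighting schemes. This is not the study's code or dataset.

```python
# Hedged sketch: nearest-neighbour propensity score matching and the ATT.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                                # household covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # crossbreed adoption
income = 2000 + 400 * treated + 300 * X[:, 0] + rng.normal(0, 200, n)

# 1. Estimate propensity scores.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. For each treated household, find the untreated household with the
#    closest propensity score.
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = c_idx[np.argmin(np.abs(ps[t_idx][:, None] - ps[c_idx][None, :]), axis=1)]

# 3. ATT = mean outcome difference between treated units and their matches.
att = np.mean(income[t_idx] - income[matches])
print(f"ATT (net income): {att:.1f}")
```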
Procedia PDF Downloads 65
501 Hydrological Analysis for Urban Water Management
Authors: Ranjit Kumar Sahu, Ramakar Jha
Abstract:
Urban Water Management is the practice of managing freshwater, waste water, and storm water as components of a basin-wide management plan. It builds on existing water supply and sanitation considerations within an urban settlement by incorporating urban water management within the scope of the entire river basin. The pervasive problems generated by urban development have prompted the present work to study the spatial extent of urbanization in the Golden Triangle of Odisha, connecting the cities of Bhubaneswar (20.2700° N, 85.8400° E), Puri (19.8106° N, 85.8314° E) and Konark (19.9000° N, 86.1200° E), and the patterns of periodic changes in urban development (systematic/random), in order to develop future plans for (i) urbanization promotion areas and (ii) urbanization control areas. Using remote sensing with USGS (U.S. Geological Survey) Landsat 8 maps, supervised classification of urban sprawl has been done for the period 1980-2014, specifically after 2000. This work presents the following: (i) time series analysis of hydrological data (groundwater and rainfall), (ii) application of SWMM (Storm Water Management Model) and other soft computing techniques for urban water management, and (iii) uncertainty analysis of model parameters (urban sprawl and correlation analysis). The outcome of the study shows drastic growth in urbanization and depletion of groundwater levels in the area, which is discussed briefly. Other related outcomes, such as the declining trend of rainfall and the rise of sand mining in the local vicinity, are also discussed. Research of this kind will (i) improve water supply and consumption efficiency, (ii) upgrade drinking water quality and waste water treatment, (iii) increase the economic efficiency of services to sustain operations and investments for water, waste water, and storm water management, and (iv) engage communities to reflect their needs and knowledge for water management.
Keywords: Storm Water Management Model (SWMM), uncertainty analysis, urban sprawl, land use change
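The time series analysis of hydrological data mentioned in point (i) can be illustrated with a basic trend check on an annual series. The synthetic rainfall series and the simple linear regression below are assumptions for illustration, not the study's data or method.

```python
# Illustrative trend check on a synthetic annual rainfall series.
import numpy as np
from scipy import stats

years = np.arange(1995, 2015)
rainfall_mm = (1450 - 8.0 * (years - 1995)
               + np.random.default_rng(1).normal(0, 60, len(years)))

result = stats.linregress(years, rainfall_mm)
print(f"trend: {result.slope:.1f} mm/year, p-value: {result.pvalue:.3f}")
```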
Procedia PDF Downloads 425
500 Existential Affordances and Psychopathology: A Gibsonian Analysis of Dissociative Identity Disorder
Authors: S. Alina Wang
Abstract:
A Gibsonian approach is used to understand the existential dimensions of the human ecological niche. This existential-Gibsonian framework is then applied to rethinking Hacking's historical analysis of multiple personality disorder. The research culminates in a generalized account of psychiatric illness from an enactivist lens. In conclusion, reflections on the implications of this account for approaches to psychiatric treatment are offered. J.J. Gibson's theory of affordances centered on affordances of sensorimotor varieties, which guide basic behaviors relative to organisms' vital needs and physiological capacities (1979). Later theorists, notably Neisser (1988) and Rietveld (2014), expanded on the theory of affordances to account for uniquely human activities relative to the emotional, intersubjective, cultural, and narrative aspects of the human ecological niche. This research shows that these affordances are structured by what Haugeland (1998) calls existential commitments, drawing on Heidegger's notion of Dasein (1927) and Merleau-Ponty's account of existential freedom (1945). These commitments organize the existential affordances that fill an individual's environment and guide their thoughts, emotions, and behaviors. This system of a priori existential commitments and a posteriori affordances is called existential enactivism. For humans, affordances do not only elicit motor responses and appear as objects with instrumental significance. Affordances also, and possibly primarily, determine so-called affective and cognitive activities and determine the wide range of kinds (e.g., instrumental, aesthetic, ethical) of significance that objects found in the world can have. Existential enactivism is then applied to understanding the psychiatric phenomenon of multiple personality disorder (the precursor of the current diagnosis of dissociative identity disorder). A reinterpretation of Hacking's (1998) insights into the history of this particular disorder, and of his generalizations on the constructed nature of most psychiatric illness, is undertaken. Enactivist approaches sensitive to existential phenomenology can provide a deeper understanding of these matters. Conceptualizing psychiatric illness as strictly a disorder in the head (whether parsed as a disorder of brain chemicals or of meaning-making capacities encoded in psychological modules) is incomplete. Rather, psychiatric illness must also be understood as a disorder in the world, or in the interconnected networks of existential affordances that regulate one's emotional, intersubjective, and narrative capacities. All of this suggests that an adequate account of psychiatric illness must involve (1) the affordances that are the sources of existential hindrance, (2) the existential commitments structuring these affordances, and (3) the conditions of these existential commitments. Approaches to the treatment of psychiatric illness would be more effective if they centered on the interruption of normalized behaviors corresponding to affordances targeted as sources of hindrance, the development of new existential commitments, and the practice of new behaviors that erect affordances relative to these reformed commitments.
Keywords: affordance, enaction, phenomenology, psychiatry, psychopathology
Procedia PDF Downloads 137
499 Effects of Plyometric Exercises on Agility, Power and Speed Improvement of U-17 Female Sprinters in Case of Burayu Athletics Project, Oromia, Ethiopia
Authors: Abdeta Bayissa Mekessa
Abstract:
The purpose of this study was to examine the effects of plyometric exercises on the agility, power, and speed of U-17 female sprinters in the Burayu Athletics project. A true experimental research design was employed. The total population of the study was 14 U-17 female sprinters from the Burayu Athletics project. Because the population was small, the researcher took all of them as the sample using a comprehensive sampling technique. The subjects were assigned to an experimental group (N=7) and a control group (N=7) using simple random sampling. The experimental group participated in plyometric training for 8 weeks, 3 days per week and 60 minutes per day, in addition to their regular training, while the control group followed only their regular training program. The variables selected for this study were agility, power and speed, assessed with the Illinois agility test, the standing long jump test, and the 30 m sprint test, respectively. Both groups were tested before (pre-test) and after (post-test) the 8 weeks of plyometric training. For data analysis, the researcher used SPSS version 26.0. The collected data were analyzed using a paired-samples t-test to compare the pre-test and post-test results for the plyometric exercises, with a significance level of p<0.05. The results show that after 8 weeks of plyometric training, significant improvements were found in agility (MD=0.45, p<0.05), power (MD=-1.157, p<0.05) and speed (MD=0.37, p<0.05) for the experimental group. On the other hand, there was no significant change (p>0.05) in those variables in the control group. The findings therefore show that eight (8) weeks of plyometric exercises had a positive effect on the agility, power and speed of female sprinters. Athletics coaches and athletes are highly recommended to include plyometric exercise in their training programs.
Keywords: plyometric exercise, speed, power, agility, female sprinter
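The paired-samples t-test used to compare pre-test and post-test results can be sketched as follows; the sprint times are hypothetical, not the project's measurements.

```python
# Hedged sketch: paired-samples t-test on hypothetical 30 m sprint times.
from scipy import stats

pre_30m_sprint  = [5.10, 5.25, 4.98, 5.40, 5.05, 5.30, 5.18]   # seconds (hypothetical)
post_30m_sprint = [4.80, 4.95, 4.70, 5.05, 4.75, 4.90, 4.85]

t_stat, p_value = stats.ttest_rel(pre_30m_sprint, post_30m_sprint)
mean_diff = sum(a - b for a, b in zip(pre_30m_sprint, post_30m_sprint)) / len(pre_30m_sprint)
print(f"mean difference = {mean_diff:.2f} s, t = {t_stat:.2f}, p = {p_value:.4f}")
```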
Procedia PDF Downloads 39
498 Development of a New Method for the Evaluation of Heat Tolerant Wheat Genotypes for Genetic Studies and Wheat Breeding
Authors: Hameed Alsamadany, Nader Aryamanesh, Guijun Yan
Abstract:
Heat is one of the major abiotic stresses limiting wheat production worldwide. To identify heat-tolerant genotypes, a newly designed system, involving a large plastic box holding many layers of filter paper positioned vertically with wheat seeds sown in between for ease of screening large numbers of wheat genotypes, was developed and used to study heat tolerance. A collection of 499 wheat genotypes was screened under heat stress (35ºC) and non-stress (25ºC) conditions using the new method. Compared with non-stress conditions, a substantial and highly significant reduction in seedling length (SL) under heat stress was observed, with an average reduction of 11.7 cm (P<0.01). A damage index (DI) for each genotype, based on SL under the two temperatures, was calculated and used to rank the genotypes. Three hexaploid genotypes of Triticum aestivum [Perenjori (DI=-0.09), Pakistan W 20B (-0.18) and SST16 (-0.28)], all growing better at 35ºC than at 25ºC, were identified as extremely heat tolerant (EHT). Two hexaploid genotypes of T. aestivum [Synthetic wheat (0.93) and Stiletto (0.92)] and two tetraploid genotypes of T. turgidum ssp dicoccoides [G3211 (0.98) and G3100 (0.93)] were identified as extremely heat susceptible (EHS). Another 14 genotypes were classified as heat tolerant (HT) and 478 as heat susceptible (HS). The extremely heat tolerant and heat susceptible genotypes were used to develop recombinant inbred line (RIL) populations for genetic studies. Four major QTLs, HTI4D, HTI3B.1, HTI3B.2 and HTI3A, located on wheat chromosomes 4D, 3B (x2) and 3A and explaining up to 34.67%, 28.93%, 13.46% and 11.34% of the phenotypic variation, respectively, were detected. The four QTLs together accounted for 88.40% of the total phenotypic variation. Random wheat genotypes possessing the four heat-tolerant alleles performed significantly better under the heat condition than those lacking the heat-tolerant alleles, indicating the importance of the four QTLs in conferring heat tolerance in wheat. Molecular markers are being developed for marker-assisted breeding of heat-tolerant wheat.
Keywords: bread wheat, heat tolerance, screening, RILs, QTL mapping, association analysis
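The damage index is described only as being based on seedling length under the two temperatures, so the formula in the sketch below (relative reduction in seedling length under heat) is one plausible reconstruction consistent with the quoted values: negative when growth is better at 35ºC and approaching 1 when growth collapses. The seedling lengths are hypothetical.

```python
# Hedged sketch of one plausible damage-index calculation (an assumption).
def damage_index(sl_control_cm: float, sl_heat_cm: float) -> float:
    """DI = 1 - SL(35C)/SL(25C): relative reduction in seedling length under heat."""
    return 1.0 - sl_heat_cm / sl_control_cm

genotypes = {
    "tolerant example":    (20.0, 21.8),   # grows slightly better under heat -> DI < 0
    "susceptible example": (22.0, 1.5),    # near-total growth inhibition -> DI near 1
}
for name, (sl25, sl35) in genotypes.items():
    print(f"{name}: DI = {damage_index(sl25, sl35):.2f}")
```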
Procedia PDF Downloads 551
497 Improvement of Plantain Leaves Nutritive Value in Goats by Urea Treatment and Nitrogen Supplements
Authors: Marie Lesly Fontin, Audalbert Bien-Aimé, Didier Marlier, Yves Beckers
Abstract:
Fecal digestibility of mature plantain leaves was determined in castrated Creole goats in order to better assess them. Five diets made from plantain leaves were used in an in vivo digestibility study on 20 castrated Creole goats over three periods, using a completely randomized design, in order to assess their apparent fecal digestibility (Dg). These diets consisted of sun-dried leaves (DL), sun-dried urea-treated leaves (DUTL, 5 kg of urea per 100 kg of raw product, ensiled for 90 days with 60 kg of water), sun-dried leaves + hoopvine (Trichostigma octandrum, L.) (DLH, DL: 61.4% + hoopvine: 38.6%), sun-dried leaves + urea (DLU, DL: 98.2% + U: 1.8%), and fresh leaves (FL). 0.5% of salt diluted with water was added to the diets before distribution to the goats. A mineral lick block was available for each goat in its digestibility cage. During each period, diets were distributed to meet the maintenance needs of the goats for 21 days, including 14 days of adaptation and 7 days of measurement. Offered and refused diets and feces were weighed every day, and samples were taken for laboratory analysis. Results showed that the urea treatment increased the CP (crude protein) content of DL by 44% (from 10.4% for DL to 15.0% for DUTL) and decreased the NDF (neutral detergent fiber) content (from 55.5% to 52.4%). Large amounts of refused feed (around 40%) were observed in goats fed the FL, DLU, and DL diets, for which no significant difference was observed in DM (dry matter) intakes (40.3, 36.6 and 35.1 g/kg^0.75, respectively) (p>0.05). DM intake of DUTL (59.9 g/kg^0.75) was significantly (p<0.05) greater than that of DLH (50.2 g/kg^0.75). DM Dg of DL was very low (29.2%). However, supplementation with hoopvine and urea treatment resulted in a significant increase in DM Dg (40.3% and 42.1%, respectively), but the addition of urea (DLU) had no effect on it. FL showed a DM Dg similar to the DLH and DUTL diets (39.0%). OM (organic matter) Dg was highest for the DUTL diet (45.1%), followed by DLH (40.9%), then DLU and FL (32.9% and 40.7%, respectively), and finally DL (29.8%). CP Dg was highest for the FL diet (65.7%) and lowest for the DL diet (39.9%). NDF Dg was also increased by the urea treatment (54.8% for DUTL) and by the addition of hoopvine (41.4% for DLH), compared to the DL diet (31.0%). In conclusion, urea treatment and supplementation of plantain leaves with hoopvine are the best treatments among those tested for increasing the nutritive value of this forage for castrated Creole goats.
Keywords: apparent fecal digestibility, nitrogen supplements, plantain leaves, urea treatment
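A short sketch of the apparent fecal digestibility calculation implied by the daily weighing of offered feed, refusals and feces is given below; the figures are hypothetical, not the study's measurements.

```python
# Hedged sketch: apparent digestibility from feed offered, refused and feces.
def apparent_digestibility(offered_g: float, refused_g: float, feces_g: float) -> float:
    intake = offered_g - refused_g              # intake = offered - refused
    return 100.0 * (intake - feces_g) / intake  # Dg as a percentage of intake

# Example: hypothetical dry-matter balance for one goat over the measurement period.
print(f"DM digestibility: {apparent_digestibility(700.0, 280.0, 250.0):.1f} %")
```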
Procedia PDF Downloads 215496 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data
Authors: Jian-Heng Wu, Bor-Shen Lin
Abstract:
The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of ocean areas. In this paper, a Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model can describe the distribution of a random vector and is formulated as a weighted sum of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of pairwise Bhattacharyya distances among their Gaussian components. Consequently, the distance between two water masses may be measured quickly, which allows automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior knowledge for assuming a regression family, but can also restrict model complexity by controlling the number of mixture components when samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage, and share temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system of ocean data, which may facilitate the comparison of water masses by aggregating the data without degrading the discriminating capabilities. This system provides an interface for interactively querying geographic locations with similar temperature-salinity-depth characteristics and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.Keywords: water mass, Gaussian mixture model, data visualization, system framework
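A minimal sketch of the described comparison is given below: one GMM is fitted per location on (temperature, salinity, depth) samples, and the distance between two GMMs is taken as a weighted sum of pairwise Bhattacharyya distances between their Gaussian components. Weighting each pair by the product of the two component weights is an assumption of this sketch, and the sample data are synthetic.

```python
# Minimal sketch of the GMM-based water-mass comparison described above. The weighting
# by the product of component weights is an assumption; the (T, S, depth) data are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * (np.linalg.slogdet(cov)[1]
                   - 0.5 * (np.linalg.slogdet(cov1)[1] + np.linalg.slogdet(cov2)[1]))
    return term1 + term2

def gmm_distance(g1, g2):
    """Weighted sum of pairwise Bhattacharyya distances between two fitted GMMs."""
    d = 0.0
    for w1, m1, c1 in zip(g1.weights_, g1.means_, g1.covariances_):
        for w2, m2, c2 in zip(g2.weights_, g2.means_, g2.covariances_):
            d += w1 * w2 * bhattacharyya(m1, c1, m2, c2)
    return d

# Hypothetical (T, S, depth) samples for two locations; real data would come from CTD profiles.
rng = np.random.default_rng(0)
loc_a = rng.normal([20.0, 34.5, 200.0], [2.0, 0.3, 80.0], size=(500, 3))
loc_b = rng.normal([15.0, 34.2, 400.0], [2.5, 0.4, 120.0], size=(500, 3))

gmm_a = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(loc_a)
gmm_b = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(loc_b)
print(f"Distance between the two water masses: {gmm_distance(gmm_a, gmm_b):.3f}")
```

Because the distance only involves the fitted component parameters, comparing many locations scales with the number of mixture components rather than the number of raw samples, which is what makes wide-area comparison fast.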
Procedia PDF Downloads 144495 Field Management Solutions Supporting Foreman Executive Tasks
Authors: Maroua Sbiti, Karim Beddiar, Djaoued Beladjine, Romuald Perrault
Abstract:
Productivity is decreasing in construction compared to the manufacturing industry. The sector appears to suffer from organizational problems and has low maturity with regard to technological advances. High international competition due to the growing context of globalization, complex projects, and shorter deadlines increases these challenges. Field employees are more exposed to coordination problems than design officers. Collaboration during execution is therefore a major issue that can threaten the cost, time, and quality completion of a project. Initially, this paper tries to identify field professionals' requirements in order to address the weaknesses of the building management process, such as unreliable scheduling, erratic monitoring and inspection processes, inaccurate project indicators, inconsistent building documents, and haphazard logistics management. Subsequently, we focus our attention on providing solutions to improve the scheduling, inspection, and hours-tracking processes using emerging lean tools and field mobility applications that bring new perspectives in terms of cooperation. They have shown a great ability to connect various field teams and to make information visible and accessible, enabling accurate planning and the elimination of potential defects at the source. In addition to the use of software as a service, the adoption of the human resource module of an Enterprise Resource Planning system can allow meticulous time accounting and thus faster decision-making. The next step is to integrate external data sources received from, or destined for, design engineers, logisticians, and suppliers in a holistic system. Creating a monolithic system that consolidates planning, quality, procurement, and resources management modules should be our ultimate target to build the construction industry supply chain.Keywords: lean, last planner system, field mobility applications, construction productivity
Procedia PDF Downloads 115494 Sustainable Happiness of Thai People: Monitoring the Thai Happiness Index
Authors: Kalayanee Senasu
Abstract:
This research investigates the influences of different factors on the happiness of Thai people, including both general and sustainability-related factors. Additionally, this study monitors Thai people's happiness via the Thai Happiness Index developed in 2017. Besides reflecting the happiness level of Thai people, this index also identifies related important issues. The data comprised related secondary data and primary survey data collected through interviewer-administered questionnaires. The survey used stratified multi-stage sampling by region, province, district, and enumeration area, with simple random sampling within each enumeration area. The research data cover 20 provinces, including Bangkok and 4-5 provinces in each of the Northern, Northeastern, Central, and Southern regions. There were 4,960 usable respondents aged at least 15 years. Statistical analyses included both descriptive and inferential statistics, including hierarchical regression and one-way ANOVA. The Alkire and Foster method was adopted to develop and calculate the Thai Happiness Index. The results reveal that the quality of the household economy plays the most important role in predicting happiness. The results also indicate that quality of family, quality of health, and effectiveness of public administration at the provincial level have positive effects on happiness at similar levels. For the socio-economic factors, the results reveal that age, education level, and household revenue have significant effects on happiness. The computed 2018 Thai Happiness Index (THaI) value is 0.556. When people are divided into four groups according to their degree of happiness, a total of 21.1% of the population are happy, with 6.0% deeply happy and 15.1% extensively happy. A total of 78.9% of the population are not yet happy, with 31.8% narrowly happy and 47.1% unhappy. The happy population yields a THaI value of 0.789, much higher than the value of 0.494 for the not-yet-happy population. Overall, Thai people's happiness is higher than in 2017, when the index was 0.506.Keywords: happiness, quality of life, sustainability, Thai Happiness Index
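The Alkire and Foster method is a counting approach: individuals are scored across weighted dimensions, identified against an overall cutoff, and the index combines the headcount with the average intensity of the identified group. A minimal sketch follows, here counting happiness achievements rather than deprivations; the dimensions, weights, cutoffs, and data are hypothetical, since the abstract does not specify the actual THaI configuration.

```python
# Minimal sketch of an Alkire-Foster style index, counting happiness achievements.
# Dimensions, weights, cutoffs, and data are hypothetical placeholders.
import numpy as np

# Rows: respondents; columns: dimension scores (e.g., household economy, family, health).
scores = np.array([
    [0.9, 0.8, 0.7],
    [0.4, 0.5, 0.6],
    [0.8, 0.9, 0.9],
    [0.2, 0.3, 0.5],
])
weights = np.array([0.4, 0.3, 0.3])      # must sum to 1
dim_cutoffs = np.array([0.6, 0.6, 0.6])  # achievement threshold per dimension
k = 0.5                                  # overall cutoff on the weighted score

achieved = (scores >= dim_cutoffs).astype(float)   # achievement matrix
weighted_score = achieved @ weights                # per-person weighted count
is_happy = weighted_score >= k                     # identification step

H = is_happy.mean()                                             # headcount ratio
A = weighted_score[is_happy].mean() if is_happy.any() else 0.0  # average intensity
index = H * A                                                   # adjusted headcount analogue
print(f"H = {H:.3f}, A = {A:.3f}, index = {index:.3f}")
```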
Procedia PDF Downloads 168493 Factors Influencing the Use of Mobile Phone by Smallholder Farmers in Vegetable Marketing in Fogera District
Authors: Molla Tadesse Lakew
Abstract:
This study was intended to identify the factors influencing the use of mobile phones in vegetable marketing in Fogera district. The extent of mobile phone use in vegetable marketing and the factors influencing it were the specific objectives of the study. Three kebeles in the Fogera district were selected purposively based on their vegetable production potential. A simple random sampling technique (lottery method) was used to select 153 vegetable producer farmers. An interview schedule and key informant interviews were used to collect primary data. For analyzing the data, descriptive statistics such as frequency and percentage, two independent-sample t-tests, and chi-square tests were used. Furthermore, econometric analysis (a binary logistic model) was used to assess the factors influencing mobile phone use for vegetable market information. The contingency coefficient and the variance inflation factor were used to check for multicollinearity among the independent variables. Of the 153 respondents, 82 (61.72%) were mobile phone users, while 71 (38.28%) were non-users. The main uses of mobile phones in vegetable marketing include communicating at a distance to save time and minimize transport costs, obtaining vegetable market price information, identifying markets and buyers for the vegetables, deciding when to sell, negotiating with buyers for better prices, and finding markets quickly to avoid losing produce to spoilage. The model results indicated that level of education, size of land, income, access to credit, and age were significant variables affecting the use of mobile phones in vegetable marketing. It is recommended to encourage adult education or to train farmers in operating mobile phones, and to create awareness among elderly rural farmers so that they can use mobile phones for their vegetable marketing. Moreover, farmers should be aware that mobile phones are very important for those who own very little land to get maximum returns from their production. Lastly, providing access to credit and improving and diversifying income sources so that farmers can afford mobile phones are recommended to improve farmers' livelihoods.Keywords: mobile phone, farmers, vegetable marketing, Fogera District
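A minimal sketch of a binary logistic model with a variance-inflation-factor check, of the kind described above, is shown below. The variable names and the simulated data are hypothetical stand-ins for the survey data, not the study's dataset.

```python
# Minimal sketch of the binary logistic model and multicollinearity check described above.
# Variable names and the simulated data are hypothetical placeholders for the survey data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 153
df = pd.DataFrame({
    "education_years": rng.integers(0, 12, n),
    "land_size_ha": rng.uniform(0.1, 2.0, n),
    "income_birr": rng.normal(20000, 6000, n),
    "credit_access": rng.integers(0, 2, n),
    "age": rng.integers(18, 70, n),
})
# Hypothetical outcome: 1 = uses a mobile phone for vegetable marketing.
linpred = 0.2 * df["education_years"] + 0.0001 * df["income_birr"] - 0.03 * df["age"]
df["mobile_user"] = (linpred + rng.normal(0, 1, n) > 1.0).astype(int)

X = sm.add_constant(df.drop(columns="mobile_user"))
model = sm.Logit(df["mobile_user"], X).fit(disp=False)
print(model.summary())

# Variance inflation factors for the explanatory variables (excluding the constant).
for i, col in enumerate(X.columns[1:], start=1):
    print(col, round(variance_inflation_factor(X.values, i), 2))
```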
Procedia PDF Downloads 73492 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks
Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev
Abstract:
One possible approach for maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where the uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, whose aim is to have an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each of which belongs to a constellation shifted from the original N-PSK symbols by a certain angle. In this paper, the legitimate pilots' offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than on their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the probability of detecting pilot contamination attacks (PCA) and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack, but the large number of possible combinations of the shifted constellations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift-value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from signals in other cells should also be taken into account. Therefore, the impact of inter-cell interference on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases inversely with the signal-to-interference-plus-noise ratio.Keywords: channel estimation, inter-cell interference, pilot contamination attacks, wireless communications
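A minimal sketch of the pilot construction is given below: two N-PSK constellations rotated by different offset angles, from which the two random legitimate pilots are drawn. The value of N, the offsets, and the brute-force grid are illustrative assumptions; the detection rule itself is not reproduced here.

```python
# Minimal sketch of constructing two shifted N-PSK pilot constellations as described above.
# N and the offset angles are illustrative; the detection rule itself is not reproduced here.
import numpy as np

def shifted_npsk(n_symbols: int, offset_deg: float) -> np.ndarray:
    """Unit-energy N-PSK constellation rotated by a given offset in degrees."""
    k = np.arange(n_symbols)
    return np.exp(1j * (2 * np.pi * k / n_symbols + np.deg2rad(offset_deg)))

N = 8
const_1 = shifted_npsk(N, offset_deg=10.0)   # first legitimate pilot constellation
const_2 = shifted_npsk(N, offset_deg=25.0)   # second legitimate pilot constellation

# Two random legitimate pilots, one drawn from each shifted constellation.
rng = np.random.default_rng(7)
pilot_1 = rng.choice(const_1)
pilot_2 = rng.choice(const_2)

# The detector relies on the relation between the two shift angles (here 15 degrees);
# an attacker would have to search over all candidate offset pairs, e.g. on a 1-degree grid.
candidate_pairs = [(a, b) for a in range(360) for b in range(360)]
print("Pilots:", np.round(pilot_1, 3), np.round(pilot_2, 3))
print("Brute-force search space:", len(candidate_pairs), "offset pairs")
```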
Procedia PDF Downloads 217491 Prevalence of Malnutrition and Associated Factors among Children Aged 6-59 Months at Hidabu Abote District, North Shewa, Oromia Regional State
Authors: Kebede Mengistu, Kassahun Alemu, Bikes Destaw
Abstract:
Introduction: Malnutrition continues to be a major public health problem in developing countries. It is the most important risk factor for the burden of disease, causing about 300,000 deaths per year and being responsible for more than half of all deaths in children. In Ethiopia, the child malnutrition rate is one of the most serious public health problems and the highest in the world. High malnutrition rates in the country pose a significant obstacle to achieving better child health outcomes. Objective: To assess the prevalence of malnutrition and associated factors among children aged 6-59 months in Hidabu Abote district, North Shewa, Oromia. Methods: A community-based cross-sectional study was conducted on 820 children aged 6-59 months from September 8-23, 2012, in Hidabu Abote district. A multistage sampling method was used to select households, and children were selected from each kebele by simple random sampling. Anthropometric measurements and structured questionnaires were used. Data were processed using Epi Info software and exported to SPSS for analysis. Sex, age in months, height, and weight, together with household numbers, were then transferred to ENA for SMART 2007 software to convert the nutritional data into Z-scores of the indices H/A, W/H, and W/A. Bivariate and multivariate logistic regressions were used to identify factors associated with malnutrition. Results: The analysis of this study revealed that 47.6%, 30.9%, and 16.7% of the children were stunted, underweight, and wasted, respectively. The main factors associated with stunting were child age, family monthly income, receiving butter as a pre-lacteal feed, and family planning. Underweight was associated with the number of children in the household and with receiving butter as a pre-lacteal feed, while untreated household water was the only factor associated with wasting. Conclusion and recommendation: The findings of this study indicate that malnutrition is still an important problem among children aged 6-59 months. Therefore, special attention should be given to interventions against malnutrition.Keywords: children, Hidabu Abote district, malnutrition, public health
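Anthropometric Z-scores such as W/A are typically computed with the LMS method against age- and sex-specific reference parameters. A minimal sketch is shown below; the L, M, S values are hypothetical placeholders, since ENA for SMART uses the official WHO growth-standard tables, which are not reproduced here.

```python
# Minimal sketch of the LMS-based Z-score calculation underlying indices such as W/A.
# The L, M, S reference values below are hypothetical placeholders, not WHO table values.
import math

def lms_zscore(measurement: float, L: float, M: float, S: float) -> float:
    """Z-score of a measurement given LMS reference parameters for the child's age/sex."""
    if L == 0:
        return math.log(measurement / M) / S
    return ((measurement / M) ** L - 1.0) / (L * S)

# Example: weight-for-age for one child, with made-up reference parameters.
z_wfa = lms_zscore(measurement=9.2, L=0.25, M=10.8, S=0.12)
print(f"Weight-for-age Z-score: {z_wfa:.2f}")
# A Z-score below -2 would classify the child as underweight under the usual cutoff.
```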
Procedia PDF Downloads 427490 Rapid Monitoring of Earthquake Damages Using Optical and SAR Data
Authors: Saeid Gharechelou, Ryutaro Tateishi
Abstract:
Earthquakes are inevitable catastrophic natural disasters. Damage to buildings and man-made structures, where most human activities occur, is the major cause of earthquake casualties. A comparison of optical and SAR data is presented for the case of the Kathmandu valley, which was severely shaken by the 2015 Nepal earthquake. Although many researchers have conducted optical-data-based estimation or suggested the combined use of optical and SAR data for improved accuracy, the availability of cloud-free optical images when they are urgently needed is not assured. Therefore, this research focuses on developing a SAR-based technique with the target of rapid and accurate geospatial reporting, considering that the limited time available in a post-disaster situation calls for quick computation based exclusively on two pairs of pre-seismic and co-seismic single look complex (SLC) images. Pre-seismic, co-seismic, and post-seismic InSAR coherence was used to detect changes in the damaged area. In addition, ground truth data from the field were applied to the optical data through random forest classification to detect the damaged area. The ground truth data collected in the field were used to assess the accuracy of the supervised classification approach. A higher accuracy was obtained from the optical data than from the integrated optical-SAR approach. Since cloud-free images are not assured when urgently needed after an earthquake event, further research on improving SAR-based damage detection is suggested. Quick reporting of the post-disaster damage situation, quantified by rapid earthquake assessment, is expected to assist in channelling rescue and emergency operations and in informing the public about the scale of damage.Keywords: Sentinel-1A data, Landsat-8, earthquake damage, InSAR, rapid damage monitoring, 2015-Nepal earthquake
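A minimal sketch of coherence-change damage mapping, of the kind used above, flags pixels whose interferometric coherence drops sharply between the pre-seismic and co-seismic pairs. The coherence rasters and the threshold below are synthetic and illustrative, not the study's values.

```python
# Minimal sketch of coherence-change damage mapping: pixels whose coherence drops sharply
# from the pre-seismic pair to the co-seismic pair are flagged as likely damaged.
# The coherence rasters and the threshold are synthetic/illustrative, not study values.
import numpy as np

rng = np.random.default_rng(42)
shape = (100, 100)
coh_pre = np.clip(rng.normal(0.7, 0.1, shape), 0, 1)               # pre-seismic coherence
coh_co = np.clip(coh_pre - np.clip(rng.normal(0.1, 0.15, shape), 0, 1), 0, 1)  # co-seismic

coherence_drop = coh_pre - coh_co
damage_mask = coherence_drop > 0.3        # illustrative threshold

print(f"Flagged as likely damaged: {damage_mask.mean():.1%} of pixels")
```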
Procedia PDF Downloads 172489 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge, with the aim of finding an exact and correct answer to the user's questions in the form of a number, a noun, a short phrase, or a brief piece of text. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking process, using a model trained on 500K queries from the MS MARCO dataset, to extract the most relevant text passage and thereby shorten the lengthy documents. Further, a QA system is used to extract answers from the shortened documents based on the query and to return the top 3 answers. For evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. However, automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date, so correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in 2016. Any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query "where will the next Olympics be?". The gold answer for this query given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, this was correct at the time. But if the same question is asked in 2022, the answer is "Paris, 2024". Consequently, any evaluation of such questions based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
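A minimal sketch of the document-extraction step, using requests and Beautiful Soup to pull the paragraph text of K pages, could look like the following. The URLs are placeholders, and the passage ranking and answer extraction stages of the system are not reproduced here.

```python
# Minimal sketch of the document-extraction step: fetch K pages and pull out their paragraph
# text with Beautiful Soup. URLs are placeholders; ranking and answering are not shown.
import requests
from bs4 import BeautifulSoup

K = 3  # trade-off between time and accuracy

def fetch_document(url: str) -> str:
    """Return the visible paragraph text of a web page, or an empty string on failure."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
    except requests.RequestException:
        return ""
    soup = BeautifulSoup(response.text, "html.parser")
    paragraphs = [p.get_text(" ", strip=True) for p in soup.find_all("p")]
    return "\n".join(paragraphs)

# Placeholder URLs standing in for the top-K search results for a query.
candidate_urls = [
    "https://example.org/page1",
    "https://example.org/page2",
    "https://example.org/page3",
]
documents = [fetch_document(u) for u in candidate_urls[:K]]
print(f"Fetched {sum(1 for d in documents if d)} non-empty documents out of {K}")
```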
Procedia PDF Downloads 101488 Humins: From Industrial By-Product to High Value Polymers
Authors: Pierluigi Tosi, Ed de Jong, Gerard van Klink, Luc Vincent, Alice Mija
Abstract:
During the last decades, renewable and low-cost resources have attracted increasing interest. Carbohydrates can be derived from lignocellulosic biomass, which is an attractive option since it represents the most abundant carbon source available in nature. Carbohydrates can be converted into a plethora of industrially relevant compounds, such as 5-hydroxymethylfurfural (HMF) and levulinic acid (LA), through the acid-catalyzed dehydration of sugars with mineral acids. Unfortunately, these acid-catalyzed conversions suffer from the unavoidable formation of highly viscous, heterogeneous, polydisperse carbon-based materials known as humins. This black-colored, low-value by-product is a complex mixture of macromolecules built by random covalent condensations of the several compounds present during the acid-catalyzed conversion. The molecular structure of humins is still under investigation but appears to be based on a network of furanic rings linked by aliphatic chains and decorated with several reactive moieties (ketones, aldehydes, hydroxyls, ...). Despite decades of research, there is currently no way to avoid humins formation. The key to enhancing the economic viability of carbohydrate conversion processes is, therefore, increasing the economic value of the humins by-product. Herein, new humins-based polymeric materials are presented that can be prepared starting from the raw by-product by thermal treatment, without any purification or pretreatment step. Humins foams can be produced by controlling key reaction parameters, yielding polymeric porous materials with designed porosity, density, thermal and electrical conductivity, chemical and electrical stability, carbon content, and mechanical properties. Physico-chemical properties can be enhanced by modifying the starting raw material or by adding different species during polymerization. A comparison of the properties of different compositions will be presented, along with tested applications. The authors gratefully acknowledge the European Community for financial support through the Marie-Curie H2020-MSCA-ITN-2015 "HUGS" Project.Keywords: by-product, humins, polymers, valorization
Procedia PDF Downloads 143487 Design and Development of an Innovative MR Damper Based on Intelligent Active Suspension Control of a Malaysia's Model Vehicle
Authors: L. Wei Sheng, M. T. Noor Syazwanee, C. J. Carolyna, M. Amiruddin, M. Pauziah
Abstract:
This paper presents alternatives to the classical passive suspension system, based on active suspension, to improve comfort and handling performance. An active magnetorheological (MR) suspension system is proposed to explore active suspension and enhance performance, given its freedom to independently specify the characteristics of load carrying, handling, and ride quality. A Malaysian quarter-car model with two degrees of freedom (2DOF) is designed and constructed to simulate the actions of an active vehicle suspension system. The structure of a conventional twin-tube shock absorber is modified both internally and externally to accommodate the active suspension system. The peripheral structure of the shock absorber is altered to enable assembly and disassembly of the damper through a non-permanent joint, and the stress analysis of the designed joint is simulated using Finite Element Analysis. The internal part, where an energized 24 AWG copper coil is wound, is simulated using Finite Element Method Magnetics to determine the magnetic flux density inside the MR damper. The primary purpose of this approach is to reduce the vibration transmitted from road surface irregularities while maintaining solid manoeuvrability. The aim of this research is to develop an intelligent control system for a consecutive-damping automotive suspension system. Ride quality is improved by reducing the vertical body acceleration experienced by the car body under disturbances from speed bumps and random road roughness. Findings from this research are expected to enhance ride quality, which in turn can prevent the deteriorating effect of vibration on the vehicle condition as well as on the passengers' well-being.Keywords: active suspension, FEA, magneto rheological damper, Malaysian quarter car model, vibration control
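A minimal sketch of a passive 2DOF quarter-car model driven over a speed bump is given below, reporting the peak vertical body acceleration that an active or MR controller would aim to reduce. All parameter values are illustrative assumptions, not the Malaysian quarter-car data, and the MR/active control force is omitted.

```python
# Minimal sketch of a passive 2DOF quarter-car model crossing a speed bump.
# Parameter values are illustrative; the MR/active control force is omitted.
import numpy as np
from scipy.integrate import solve_ivp

m_s, m_u = 290.0, 40.0        # sprung and unsprung masses (kg)
k_s, c_s = 16000.0, 1000.0    # suspension stiffness (N/m) and damping (N s/m)
k_t = 190000.0                # tire stiffness (N/m)

def road(t):
    """Half-sine speed bump, 5 cm high, crossed between t = 1 s and t = 1.5 s."""
    return 0.05 * np.sin(2 * np.pi * (t - 1.0)) if 1.0 <= t <= 1.5 else 0.0

def dynamics(t, y):
    xs, vs, xu, vu = y
    f_susp = k_s * (xu - xs) + c_s * (vu - vs)   # suspension force on the sprung mass
    f_tire = k_t * (road(t) - xu)                # tire force on the unsprung mass
    return [vs, f_susp / m_s, vu, (f_tire - f_susp) / m_u]

sol = solve_ivp(dynamics, (0.0, 5.0), [0, 0, 0, 0], max_step=0.001)
body_acc = np.gradient(sol.y[1], sol.t)          # vertical body acceleration (m/s^2)
print(f"Peak vertical body acceleration: {np.max(np.abs(body_acc)):.2f} m/s^2")
```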
Procedia PDF Downloads 209