Search results for: Control methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7309


139 Prevention of Corruption in Public Purchases

Authors: Anatoly Krivinsh

Abstract:

The results of the dissertation research "Preventing and Combating Corruption in Public Procurement" are presented in this publication. The study was conducted from 2011 to 2013 in a Member State of the European Union, the Republic of Latvia. The goal of the thesis is to explore corruption prevention and combating issues in the public procurement sphere and to identify prevalence rates, determinants, contributing factors and prevention opportunities in Latvia. In the first chapter the author analyzes theoretical aspects of understanding corruption in public procurement, with particular emphasis on the problem of defining corruption, its nature, causes and consequences. A separate section is dedicated to the public procurement concept, mechanism and legal framework. In this first part the author also presents a cognitive methodology for studying corruption in the public procurement field, on the basis of which an analysis of the corruption situation in public procurement in the Republic of Latvia has been carried out. In the second chapter of the thesis, the author analyzes the problem of corruption in public procurement, including its historical aspects, the typology and classification of the corruption subjects involved, and corruption risk elements in public procurement and their identification. The author's practical experience in public procurement was widely used during the development of the second chapter. The third and fourth chapters deal with issues related to preventing and combating corruption in public procurement, namely the concept, principles, methods, techniques and subjects involved in the Republic of Latvia, as well as an analysis of foreign experience in preventing and combating corruption. The fifth chapter is devoted to corruption prevention and combating perspectives and their assessment. In this chapter the author evaluates the efficiency of corruption prevention and combating measures in the Republic of Latvia and assesses the stage of development of anti-corruption legislation in the public procurement field in Latvia.

Keywords: Prevention of corruption, public purchases.

138 Investigating Prostaglandin E2 and Intracellular Oxidative Stress Levels in Lipopolysaccharide-Stimulated RAW 264.7 Macrophages upon Treatment with Strobilanthes crispus

Authors: Anna Pick Kiong Ling, Jia May Chin, Rhun Yian Koh, Ying Pei Wong

Abstract:

Background: Uncontrolled inflammation may cause serious inflammatory diseases if left untreated. Non-steroidal anti-inflammatory drugs (NSAIDs) are commonly used to inhibit pro-inflammatory enzymes and thus reduce inflammation. However, long-term administration of NSAIDs leads to various complications. Medicinal plants are receiving more attention as they are believed to be more compatible with the human body. One of them is a flavonoid-containing medicinal plant, Strobilanthes crispus, which has traditionally been claimed to possess anti-inflammatory and antioxidant activities. Nevertheless, its anti-inflammatory activities are yet to be scientifically documented. Objectives: This study aimed to examine the anti-inflammatory activity of S. crispus by investigating its effects on intracellular oxidative stress and prostaglandin E2 (PGE2) levels. Materials and Methods: The Maximum Non-toxic Dose (MNTD) of the methanol extract of both leaves and stems of S. crispus was first determined using the 3-(4,5-dimethylthiazolyl-2)-2,5-diphenyltetrazolium bromide (MTT) assay. The effects of S. crispus extracts at the MNTD and half MNTD (½MNTD) on intracellular reactive oxygen species (ROS) and PGE2 levels in 1.0 µg/mL LPS-stimulated RAW 264.7 macrophages were then measured using DCFH-DA and a competitive enzyme immunoassay kit, respectively. Results: The MNTD of the leaf extract was determined as 700 µg/mL, while that of the stem extract was as low as 1.4 µg/mL. When LPS-stimulated RAW 264.7 macrophages were treated with the MNTD of S. crispus leaf extract, both intracellular ROS and PGE2 levels were significantly reduced. In contrast, the stem extract at both the MNTD and ½MNTD did not significantly reduce the PGE2 level, but significantly increased the intracellular ROS level. Conclusion: The methanol leaf extract of S. crispus may possess anti-inflammatory properties, as it is able to significantly reduce the intracellular ROS and PGE2 levels of LPS-stimulated cells. Nevertheless, further studies, such as investigating interleukin, nitric oxide and tumor necrosis factor-α (TNF-α) levels, have to be conducted to further confirm the anti-inflammatory properties of S. crispus.

Keywords: Anti-inflammatory, natural products, prostaglandin E2, reactive oxygen species.

137 Optimization of the Headspace Solid-Phase Microextraction Gas Chromatography for Volatile Compounds Determination in Phytophthora Cinnamomi Rands

Authors: Rui Qiu, Giles Hardy, Dong Qu, Robert Trengove, Manjree Agarwal, YongLin Ren

Abstract:

Phytophthora cinnamomi (P. c) is a plant pathogenic oomycete that is capable of damaging plants in commercial production systems and natural ecosystems worldwide. The most common methods for the detection and diagnosis of P. c infection are expensive, elaborate and time consuming. This study was carried out to examine whether species-specific and life-cycle-specific volatile organic compounds (VOCs) produced by P. c and another oomycete, Pythium dissotocum, can be absorbed by solid-phase microextraction fibers and detected by gas chromatography. A headspace solid-phase microextraction (HS-SPME) method combined with gas chromatography (GC) was developed and optimized for the identification of the VOCs released by P. c. The optimized parameters included the type of fiber, exposure time, desorption temperature and desorption time. Optimization was achieved with the analytes of P. c+V8A and V8A alone. To perform the HS-SPME, six types of fiber were assayed and compared: 7 μm polydimethylsiloxane (PDMS), 100 μm polydimethylsiloxane (PDMS), 50/30 μm divinylbenzene/Carboxen™/polydimethylsiloxane (DVB/CAR/PDMS), 65 μm polydimethylsiloxane/divinylbenzene (PDMS/DVB), 85 μm polyacrylate (PA) and 85 μm Carboxen™/polydimethylsiloxane (Carboxen™/PDMS). In a comparison of the efficacy of the fibers, the bipolar fiber DVB/CAR/PDMS had a higher extraction efficiency than the other fibers. An exposure time of 16 h with the DVB/CAR/PDMS fiber in the sample headspace was enough to reach the maximum extraction efficiency. A desorption time of 3 min in the GC injector at a desorption temperature of 250°C was enough for the fiber to desorb the compounds of interest. The chromatograms and morphology study confirmed that the VOCs from P. c+V8A had distinct differences from V8A alone, as did different life cycle stages of P. c and different taxa such as Pythium dissotocum. The study showed that P. c has species- and life-cycle-specific VOCs, which in turn demonstrated the feasibility of this method as a means of detection.

Keywords: Gas chromatography, headspace solid-phase microextraction, optimization, volatile compounds.

136 A Comparative Study of the Techno-Economic Performance of the Linear Fresnel Reflector Using Direct and Indirect Steam Generation: A Case Study under High Direct Normal Irradiance

Authors: Ahmed Aljudaya, Derek Ingham, Lin Ma, Kevin Hughes, Mohammed Pourkashanian

Abstract:

Researchers, power companies, and state politicians have given concentrated solar power (CSP) much attention due to its capacity to generate large amounts of electricity while overcoming the intermittent nature of solar resources. The Linear Fresnel Reflector (LFR) is a well-known CSP technology, notable for being inexpensive and having a low land-use factor, but it suffers from low optical efficiency. The LFR has been considered a cost-effective alternative to the Parabolic Trough Collector (PTC) because of its simple design, and this often outweighs its lower efficiency. Commercial LFR power plants generate steam either directly or indirectly in order to produce electricity with high technical efficiency and lower costs. The purpose of this analysis is to compare the annual performance of Direct Steam Generation (DSG) and Indirect Steam Generation (ISG) LFR power plants using molten salt and other Heat Transfer Fluids (HTFs), in order to investigate their technical and economic effects. A 50 MWe solar-only system is examined as a case study for both steam production methods under extreme weather conditions. In addition, a parametric analysis is carried out to determine the optimal solar field size that provides the lowest Levelized Cost of Electricity (LCOE) while achieving the highest technical performance. As a result of this optimization, the optimal solar multiple (SM) is found to be between 1.2 and 1.5, yielding an LCOE as low as 9 cents/kWh for the DSG configuration of the LFR. The DSG plant is capable of producing around 141 GWh annually with a capacity factor of up to 36%, whereas the ISG produces less energy at a higher cost. The optimization results show that the DSG outperforms the ISG, producing around 3% more annual energy at a 2% lower LCOE and 28% lower capital cost.
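
A minimal sketch of how an LCOE figure of this kind can be computed from capital cost, operation-and-maintenance cost and annual generation via a capital recovery factor; the plant numbers below are illustrative placeholders, not values from the study.

```python
# Illustrative LCOE calculation (placeholder inputs, not the study's data).
def lcoe(capital_cost, annual_om_cost, annual_energy_kwh, discount_rate, lifetime_years):
    """Levelized cost of electricity, in cost units per kWh."""
    # The capital recovery factor annualizes the up-front investment.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years) / \
          ((1 + discount_rate) ** lifetime_years - 1)
    return (capital_cost * crf + annual_om_cost) / annual_energy_kwh

# Example: a hypothetical 50 MWe plant producing 141 GWh per year.
print(lcoe(capital_cost=200e6, annual_om_cost=3e6,
           annual_energy_kwh=141e6, discount_rate=0.07, lifetime_years=25))
```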

Keywords: Concentrated Solar Power, Levelized cost of electricity, Linear Fresnel reflectors, Steam generation.

135 Exploring the Perspective of Service Quality in mHealth Services during the COVID-19 Pandemic

Authors: Wan-I Lee, Nelio Mendoza Figueredo

Abstract:

COVID-19 has had a significant impact on all sectors of society globally. Health information technology (HIT) has become an effective health strategy in this age of distancing. In this regard, Mobile Health (mHealth) plays a critical role in managing patient and provider workflows during the COVID-19 pandemic. Users' perception of the service quality of mHealth services therefore plays a significant role in shaping their confidence and subsequent behaviors regarding their intention to use mHealth. The objective of this study was to explore, using a qualitative method, how health practitioners and patients are satisfied or dissatisfied with mHealth services, and to analyze users' intention to use them in the context of Taiwan during the COVID-19 pandemic. This research explores the experienced usability of mHealth services during the COVID-19 pandemic. The study uses qualitative methods, including in-depth and semi-structured interviews, to investigate participants' perceptions and experiences and the meanings they attribute to them. The five cases consisted of health practitioners, clinic staff, and patients with experience using mHealth services. Participants were encouraged to discuss issues related to the research question through open-ended questions, usually in one-to-one interviews. The findings show the positive and negative attributes of mHealth service quality. The issues of greatest importance to patients and health practitioners fall along several dimensions of perceived service quality: system quality, information quality, and interaction quality. A concept map of perceptions regarding users' intention to use mHealth services in emergencies is depicted. The findings revealed that users pay more attention to "medical care", "ease of use" and "utilitarian benefits" and attach less importance to "admissions and convenience" and "social influence". To improve mHealth services, mHealth providers and health practitioners should better manage users' experiences. This research contributes to the understanding of service quality issues in mHealth services during the COVID-19 pandemic.

Keywords: COVID-19, mobile health, mHealth, service quality, use intention.

134 A Study on the Differential Diagnostic Model for Newborn Hearing Loss Screening

Authors: Chun-Lang Chang

Abstract:

According to statistics, the prevalence of congenital hearing loss in Taiwan is approximately six per thousand; furthermore, one per thousand infants have severe hearing impairment. Hearing ability during infancy has a significant impact on the development of children's oral expression, language maturity, cognitive performance, educational ability and social behavior later in life. Although most children born with hearing impairment have sensorineural hearing loss, almost every child retains at least some residual hearing. If provided with a hearing aid or cochlear implant (a bionic ear) in time, in addition to hearing and speech training, even severely hearing-impaired children can still learn to talk. On the other hand, those who are not diagnosed and thus unable to begin hearing and speech rehabilitation in a timely manner may lose an important opportunity to live a complete and healthy life. Eventually, the lack of hearing and speaking ability will affect the development of both mental and physical functions, intelligence, and social adaptability. Not only does this problem result in an irreparable lifelong loss for the hearing-impaired child, it also creates a heavy burden for the family and society. Therefore, it is necessary to establish a computer-assisted predictive model that can accurately detect and help diagnose newborn hearing loss, so that early interventions can be provided in time and waste of medical resources can be eliminated. This study uses information from the neonatal database of the case hospital as the subjects, adopting two different analysis methods: using support vector machines (SVM) for model prediction directly, and using logistic regression to conduct factor screening prior to model prediction with SVM, and then comparing the results. The results indicate that prediction accuracy is as high as 96.43% when the factors are screened and selected through logistic regression. Hence, the model constructed in this study can provide real help to physicians in clinical diagnosis and genuinely benefit early intervention for newborn hearing impairment.
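
A brief sketch of the two-stage approach described above (logistic-regression factor screening followed by an SVM), written with scikit-learn on placeholder data; the variables, sample size and retained-factor count are assumptions, not the hospital's neonatal records.

```python
# Sketch: screen factors with logistic regression, then train an SVM on the survivors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.rand(500, 10)          # placeholder screening variables
y = np.random.randint(0, 2, 500)     # 1 = hearing loss, 0 = normal (placeholder labels)

# Stage 1: rank factors by the magnitude of their logistic-regression coefficients.
screen = LogisticRegression(max_iter=1000).fit(X, y)
keep = np.argsort(np.abs(screen.coef_[0]))[-5:]   # retain the 5 strongest factors

# Stage 2: train an SVM on the screened factors and estimate accuracy.
svm = SVC(kernel="rbf")
print(cross_val_score(svm, X[:, keep], y, cv=5).mean())
```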

Keywords: Data mining, hearing impairment, logistic regression analysis, support vector machines.

133 Corporate Information System Educational Center

Authors: Alquliyev R.M., Kazimov T.H., Mahmudova Sh.C., Mahmudova R.Sh.

Abstract:

This work describes the Educational Center created and successfully maintained at the Institute of Information Technologies of the National Academy of Sciences (NAS) of Azerbaijan. On the basis of a decision of the board of the Supreme Certifying Commission under the President of the Republic of Azerbaijan and of the Presidium of the National Academy of Sciences of the Republic of Azerbaijan, the Institute of Information Technologies was entrusted with organizing training courses in computer science for all post-graduate students and dissertation candidates of the republic and with administering the candidate minimum examinations. Accordingly, the Educational Center teaches computer science to post-graduate students and dissertation candidates, provides a scientific-methodological manual on the effective application of new information technologies in their research work, and conducts the candidate minimum examinations. Information and communication technologies offer new opportunities and prospects for teaching and training. The new level of literacy demands the creation of an essentially new technology for obtaining scientific knowledge. Methods of training and development, social and professional requirements, and the globalization of communicative, economic and political projects connected with the construction of a new society all depend on the level of application of information and communication technologies in the educational process. Computer technologies develop the ideas of programmed training and open completely new, previously unexplored technological ways of training connected to the unique capabilities of modern computers and telecommunications. Computer technologies of training are processes of preparing and transferring information to the trainee by means of a computer. Scientific and technical progress, as well as the global spread of technologies created in the most developed countries of the world, is the main proof of the leading role of education in the 21st century. The information society needs individuals with modern knowledge. In practice, all technologies using special technical information means (computer, audio, video) are called information technologies of education.

Keywords: Educational Center, post-graduate, database.

132 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system which can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. The peaberry is not a defective bean, but it is not a normal bean either. A peaberry forms when a coffee cherry develops only a single, relatively round seed instead of the usual flat-sided pair of beans, and it has a different value and flavor. To improve the taste of the coffee, it is necessary to separate peaberries from normal beans before the green coffee beans are roasted; otherwise, the flavors of the beans mix and the overall taste suffers. During roasting, the beans' shape, size and weight should be uniform; otherwise, larger beans take more time to roast through. Peaberries have a different size and shape even though they may have the same weight as normal beans, and they roast more slowly than normal beans; therefore, neither size- nor weight-based sorting provides a good option for selecting peaberries. Defective beans, e.g. sour, broken, black and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult even for trained specialists, because the shape and color of the peaberry are similar to normal beans. In this study, we use image processing and machine learning techniques to discriminate normal and peaberry beans as part of the sorting system. As the first step, we applied deep Convolutional Neural Networks (CNN) and Support Vector Machines (SVM) as machine learning techniques to discriminate peaberry and normal beans. As a result, better performance was obtained with the CNN than with the SVM for the discrimination of peaberries. The artificial neural network trained on a high-performance CPU and GPU in this work can then simply be installed on an inexpensive, computationally modest Raspberry Pi system. We assume that this system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
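
A minimal sketch of the kind of two-class CNN compared against the SVM, using the Keras API; the input size, layer sizes and training call are illustrative assumptions rather than the architecture used in the paper.

```python
# Minimal two-class CNN for peaberry vs. normal bean images (illustrative only).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = peaberry, 0 = normal bean
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # with a labelled bean image set
```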

Keywords: Convolutional neural networks, coffee bean, peaberry, sorting, support vector machine.

131 Connotation Reform and Problem Response of Rural Social Relations under the Influence of the Earthquake: With a Review of Wenchuan Decade

Authors: Yanqun Li, Hong Geng

Abstract:

The Wenchuan earthquake of 2008 caused severe damage to the rural areas of Chengdu city, including the rupture of social networks, the stagnation of economic production and the rupture of living space, and post-disaster reconstruction has become a long-term issue of sustainability. As an important link maintaining the order of rural social development, the social network should be an important component of post-disaster reconstruction. Therefore, this paper takes rural reconstruction communities in the earthquake-stricken areas of Chengdu as the research object and adopts sociological research methods such as field survey, observation and interview to understand the transformation of rural social relation networks under the influence of the earthquake and their impact on rural space. It is found that rural societies affected by the earthquake generally experienced three phases: the breaking of stable social relations, a transitional, temporarily abnormal state, and the reorganization of social networks. The connotation of rural social relations in each phase changed accordingly: toward a new division of labor in the social dimension, toward capital flow and redistribution under a new production mode in the capital dimension, and toward relative decentralization after concentration in the spatial dimension. Along with such changes, rural areas have developed social issues such as the alienation of competition in the new industrial division of labor, weak social connection, significant redistribution of capital, and a lack of public space. Based on a comprehensive review of these issues, this paper proposes corresponding response mechanisms. First of all, a reasonable division of labor should be established within the villages to realize diversified commodity supply. Secondly, the villages should adjust their industrial types to promote the equitable participation of capital allocation groups. Finally, external public spaces should be added to strengthen social interaction within the communities.

Keywords: Social relations, social support networks, industrial division, capital allocation, public space.

130 Material Concepts and Processing Methods for Electrical Insulation

Authors: R. Sekula

Abstract:

Epoxy composites are broadly used as electrical insulation for high voltage applications, since only such materials can fulfill the particular mechanical, thermal, and dielectric requirements. However, the properties of the final product are strongly dependent on a proper manufacturing process with minimized material failures, such as excessive shrinkage, voids and cracks. Therefore, the application of proper materials (epoxy, hardener, and filler) and process parameters (mold temperature, filling time, filling velocity, initial temperature of internal parts, gelation time), as well as design and geometric parameters, is essential for the final quality of the produced components. In this paper, an approach for three-dimensional modeling of all molding stages, namely filling, curing and post-curing, is presented. The reactive molding simulation tool is based on a commercial CFD package and includes dedicated models describing viscosity and reaction kinetics that have been successfully implemented to simulate the reactive nature of the system with its exothermic effect. A dedicated simulation procedure for stress and shrinkage calculations, as well as simulation results, is also presented in the paper. The second part of the paper is dedicated to recent developments in formulations of functional composites for electrical insulation applications, focusing on thermally conductive materials. Concepts based on filler modifications for epoxy electrical composites are presented, including the resulting properties. Finally, having in mind tough environmental regulations, in addition to current process and design aspects, an approach for product re-design is presented, focusing on the replacement of the epoxy material with a thermoplastic one. Such a "design-for-recycling" method is one of the new directions associated with the development of new material and processing concepts for electrical products and brings many additional research challenges. To illustrate the presented methodology, one of the successful products is presented.
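
To illustrate the kind of reaction-kinetics model such a reactive molding tool couples to the flow solution, here is a minimal autocatalytic (Kamal-type) cure-kinetics integration; the rate constants and exponents are placeholders, not the authors' calibrated model.

```python
# Illustrative autocatalytic cure-kinetics model, d(alpha)/dt = (k1 + k2*alpha^m)*(1-alpha)^n.
# Parameters are placeholders; a real tool would use temperature-dependent, calibrated values.
import numpy as np
from scipy.integrate import solve_ivp

def cure_rate(t, alpha, k1=1e-3, k2=5e-3, m=0.5, n=1.5):
    a = alpha[0]
    return [(k1 + k2 * a**m) * (1.0 - a)**n]

sol = solve_ivp(cure_rate, (0.0, 3600.0), [0.0], dense_output=True)
t = np.linspace(0.0, 3600.0, 5)
print(dict(zip(t, sol.sol(t)[0])))   # degree of cure over one hour
```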

Keywords: Curing, epoxy insulation, numerical simulations, recycling.

129 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making a profit by speculation in financial markets. A significant number of traders and private or institutional investors participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed for building a reliable trend line, which is the base for limit conditions and automated investment signals and the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals and limit conditions that form a mathematical filter for investment opportunities, and presents the methodology for integrating all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a 1:6.12 risk-to-reward ratio was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
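
As a rough illustration of trading around a predicted trend line with limit conditions, the sketch below fits a least-squares line to a short price series and emits buy/sell/hold signals when the price leaves a band around it; this is a generic example, not the Price Prediction Line algorithm itself, and the prices and band width are placeholders.

```python
# Generic least-squares trend line with simple threshold-based entry/exit signals.
import numpy as np

prices = np.array([100.0, 101.2, 100.8, 102.5, 103.1, 102.9, 104.0])  # placeholder closes
t = np.arange(len(prices))
slope, intercept = np.polyfit(t, prices, 1)      # fitted trend line
trend = slope * t + intercept

band = 1.0                                       # limit condition (price units, assumed)
signals = np.where(prices < trend - band, "buy",
           np.where(prices > trend + band, "sell", "hold"))
print(list(zip(prices, np.round(trend, 2), signals)))
```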

Keywords: Algorithmic trading, automated investment system, DAX Deutscher Aktienindex.

128 Enhancing Learning for Research Higher Degree Students

Authors: Jenny Hall, Alison Jaquet

Abstract:

Universities’ push toward the production of high quality research is not limited to academic staff and experienced researchers. In this environment of research rich agendas, Higher Degree Research (HDR) students are increasingly expected to engage in the publishing of good quality papers in high impact journals. IFN001: Advanced Information Research Skills (AIRS) is a credit bearing mandatory coursework requirement for Queensland University of Technology (QUT) doctorates. Since its inception in 1989, this unique blended learning program has provided the foundations for new researchers to produce original and innovative research. AIRS was redeveloped in 2012, and has now been evaluated with reference to the university’s strategic research priorities. Our research is the first comprehensive evaluation of the program from the learner perspective. We measured whether the program develops essential transferrable skills and graduate capabilities to ensure best practice in the areas of publishing and data management. In particular, we explored whether AIRS prepares students to be agile researchers with the skills to adapt to different research contexts both within and outside academia. The target group for our study consisted of HDR students and supervisors at QUT. Both quantitative and qualitative research methods were used for data collection. Gathering data was by survey and focus groups with qualitative responses analyzed using NVivo. The results of the survey show that 82% of students surveyed believe that AIRS assisted their research process and helped them learn skills they need as a researcher. The 18% of respondents who expressed reservation about the benefits of AIRS were also examined to determine the key areas of concern. These included trends related to the timing of the program early in the candidature and a belief among some students that their previous research experience was sufficient for postgraduate study. New insights have been gained into how to better support HDR learners in partnership with supervisors and how to enhance learning experiences of specific cohorts, including international students and mature learners.

Keywords: Data management, enhancing learning experience, publishing, research higher degree students.

127 Perception of Predictive Confounders for the Prevalence of Hypertension among Iraqi Population: A Pilot Study

Authors: Zahraa Albasry, Hadeel D. Najim, Anmar Al-Taie

Abstract:

Background: Hypertension is considered one of the most important causes of cardiovascular complications and one of the leading causes of worldwide mortality. Identifying the potential risk factors associated with this health problem plays an important role in minimizing its incidence and related complications. The objective of this study is to assess and understand the perception of specific predictive confounding factors on the prevalence of hypertension (HT) among a sample of the Iraqi population in Baghdad, Iraq. Materials and Methods: A randomized cross-sectional study was carried out on 100 adult subjects during their visit to the outpatient clinic at a certain sector of Baghdad Province, Iraq. Demographic, clinical and health records, alongside specific screening and laboratory tests of the participants, were collected and analyzed to detect the effect of potential confounding factors on the prevalence of HT. Results: 63% of the study participants suffered from HT, most of them female patients (P < 0.005). Patients aged 41-50 years suffered from HT significantly more than other age groups (63.5%, P < 0.001). 88.9% of the participants were obese (P < 0.001) and 47.6% had diabetes with HT. A positive family history and a sedentary lifestyle were significantly more common among all hypertensive groups (P < 0.05). High salt and fatty food intake was significantly more common among patients suffering from isolated systolic hypertension (ISHT) (P < 0.05). A significant positive correlation between packed cell volume (PCV) and systolic blood pressure (SBP) (r = 0.353, P = 0.048) was found among normotensive participants. Among hypertensive patients, a significant positive correlation was found between triglycerides (TG) and both SBP (r = 0.484, P = 0.031) and diastolic blood pressure (DBP) (r = 0.463, P = 0.040), while low density lipoprotein-cholesterol (LDL-c) showed a significant positive correlation with DBP (r = 0.443, P = 0.021). Conclusion: The prevalence of HT among the Iraqi population is of major concern. Further work is required to assess the impact of potential risk factors, to minimize blood pressure (BP) elevation and to reduce the risk of other cardiovascular complications later in life.

Keywords: Correlation, hypertension, Iraq, risk factors.

126 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on artificial stock markets (ASMs). The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market system is a complex system in which the relationship between the micro and macro levels cannot be captured analytically. Computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents' attributes. The influence of social networks on the development of agent interactions is also addressed. Network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized; these incorporate approaches such as genetic algorithms, genetic programming, artificial neural networks and reinforcement learning. Furthermore, the most common statistical properties (the stylized facts) of stocks that are used for the calibration and validation of ASMs are discussed. We also review the major related previous studies and categorize the approaches utilized in them. Finally, research directions and potential research questions are discussed. Future research on ASMs may focus on the macro level by analyzing market dynamics, or on the micro level by investigating the wealth distributions of the agents.
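
A toy example of the agent-based simulation approach surveyed here: heterogeneous agents (fundamentalists and noise traders) submit demands and the price moves with excess demand; all rules and parameters are illustrative assumptions rather than any model from the reviewed literature.

```python
# Toy agent-based stock market: the price moves with the excess demand of two agent types.
import random

price, fundamental = 100.0, 100.0
history = [price]
for step in range(200):
    demand = 0.0
    for _ in range(50):                       # 50 heterogeneous agents
        if random.random() < 0.5:             # fundamentalist: trade toward fundamental value
            demand += 0.05 * (fundamental - price)
        else:                                 # noise trader: random order
            demand += random.uniform(-1.0, 1.0)
    price = max(0.01, price + 0.01 * demand)  # simple price-impact rule
    history.append(price)
print(min(history), max(history), history[-1])
```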

Keywords: Artificial stock markets, agent based simulation, bounded rationality, behavioral finance, artificial neural network, interaction, scale-free networks.

125 Genotypic and Allelic Distribution of Polymorphic Variants of Gene SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) and Their Association to the Clinical Response to Metformin in Adult Pakistani T2DM Patients

Authors: Sadaf Moeez, Madiha Khalid, Zoya Khalid, Sania Shaheen, Sumbul Khalid

Abstract:

Background: Inter-individual variation in response to metformin, which is considered a first-line therapy for T2DM, is considerable. The current study aimed to investigate the impact of two genetic variants, Leu125Phe (rs77474263) and Gly64Asp (rs77630697) in the gene SLC47A1, on the clinical efficacy of metformin in Pakistani T2DM patients. Methods: The study included 800 T2DM patients (400 metformin responders and 400 metformin non-responders) along with 400 ethnically matched healthy individuals. Genotypes were determined by allele-specific polymerase chain reaction. In-silico analysis was done to confirm the effect of the two SNPs on the structure of the gene products. Association was statistically determined using SPSS software. Results: The minor allele frequencies for rs77474263 and rs77630697 were 0.13 and 0.12, respectively. For SLC47A1 rs77474263, heterozygous carriers of one mutant allele 'T' (CT) were fewer among metformin responders than among metformin non-responders (29.2% vs. 35.5%). Likewise, efficacy was further reduced (7.2% vs. 4.0%) in homozygous carriers of two copies of the 'T' allele (TT). Remarkably, T2DM cases with two copies of the 'C' allele (CC) were 2.11 times more likely to respond to metformin monotherapy. For SLC47A1 rs77630697, heterozygous carriers of one mutant allele 'A' (GA) were fewer among metformin responders than among metformin non-responders (33.5% vs. 43.0%). Likewise, efficacy was further reduced (8.5% vs. 4.5%) in homozygous carriers of two copies of the 'A' allele (AA). Remarkably, T2DM cases with two copies of the 'G' allele (GG) were 2.41 times more likely to respond to metformin monotherapy. In-silico analysis revealed that these two variants affect the structure and stability of their corresponding proteins. Conclusion: The present data suggest that the SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) polymorphisms are associated with the therapeutic response to metformin in T2DM patients of Pakistan.
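
An effect size such as "2.11 times more likely to respond" is typically reported as an odds ratio computed from genotype counts; the sketch below shows that calculation with a Wald confidence interval on placeholder counts, not the study's data.

```python
# Odds ratio from a 2x2 genotype-by-response table (placeholder counts, not study data).
import math

responders_CC, responders_other = 255, 145
nonresponders_CC, nonresponders_other = 222, 178

odds_ratio = (responders_CC / responders_other) / (nonresponders_CC / nonresponders_other)
se_log_or = math.sqrt(1/responders_CC + 1/responders_other +
                      1/nonresponders_CC + 1/nonresponders_other)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)   # 95% Wald interval
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(round(odds_ratio, 2), (round(ci_low, 2), round(ci_high, 2)))
```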

Keywords: Diabetes, T2DM, SLC47A1, Pakistan, polymorphism.

124 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection

Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay

Abstract:

With the increase in credit card usage, the volume of credit card misuse has also increased significantly, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on some static characteristics, such as what the user knows (knowledge-based methods) or what he/she has access to (object-based methods). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while interacting with his/her electronic devices. These approaches rely on how people behave (instead of what they know or have), which cannot be easily forged. By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable and scalable models of credit card fraud detection.

Keywords: Credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey.

123 Exploratory Tests of Crude Bacteriocins from Autochthonous Lactic Acid Bacteria against Food-Borne Pathogens and Spoilage Bacteria

Authors: M. Naimi, M. B. Khaled

Abstract:

The aim of the present work was to test the in vitro inhibition of food pathogens and spoilage bacteria by crude bacteriocins from autochthonous lactic acid bacteria. Thirty autochthonous lactic acid bacteria isolated previously, belonging to the genera Lactobacillus, Carnobacterium, Lactococcus, Vagococcus, Streptococcus, and Pediococcus, were screened by an agar spot test and a well diffusion assay against Gram-positive and Gram-negative harmful bacteria: Bacillus cereus, Bacillus subtilis ATCC 6633, Escherichia coli ATCC 8739, Salmonella typhimurium ATCC 14028, Staphylococcus aureus ATCC 6538, and Pseudomonas aeruginosa, under conditions meant to reduce the effect of lactic acid and hydrogen peroxide, in order to select bacteria with high bacteriocinogenic potential. Furthermore, semi-quantification of the crude bacteriocins and their heat sensitivity at different temperatures (80, 95, 110°C, and 121°C) were performed. Another exploratory test, concerning the response of St. aureus ATCC 6538 to the presence of crude bacteriocins, was carried out. In the agar spot test, fifteen candidates were active toward the Gram-positive target strains. The secondary screening demonstrated an antagonistic activity oriented only against St. aureus ATCC 6538, leading to the selection of five isolates, Lm14, Lm21, Lm23, Lm24, and Lm25, with larger inhibition zones compared to the others. The ANOVA statistical analysis reveals a small variation in repeatability: Lm21: 0.56%, Lm23: 0%, Lm25: 1.67%, Lm14: 1.88%, Lm24: 2.14%. Conversely, slight variation was reported in terms of inhibition diameters: 9.58 ± 0.40, 9.83 ± 0.46, 10.16 ± 0.24, 8.5 ± 0.40 and 10 mm for Lm21, Lm23, Lm25, Lm14 and Lm24, respectively, indicating that the observed potential showed a heterogeneous distribution (BMS = 0.383, WMS = 0.117). The calculated repeatability coefficient was 7.35%. As for the bacteriocin semi-quantification, the five samples exhibited production amounts of about 4.16 AU/mL for Lm21, Lm23 and Lm25 and 2.08 AU/mL for Lm14 and Lm24. Concerning heat sensitivity, the crude bacteriocins were fully insensitive to heat inactivation up to 121°C, preserving the same inhibition diameter. As for the growth kinetics, the µmax showed reductions in pathogen load of about 42.92%, 84.12%, 88.55%, 54.95% and 29.97% for Lm21, Lm23, Lm25, Lm14 and Lm24, respectively, in the second trials. Conversely, the growth of this pathogen after five hours displayed differences of 79.45%, 12.64%, 11.82%, 87.88% and 85.66% in the second trials, compared to the control. This study showed potential inhibition of the growth of this food pathogen, suggesting the possibility of improving the hygienic quality of food.

Keywords: Exploratory test, lactic acid bacteria, crude bacteriocins, spoilage, pathogens.

122 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Authors: Alexander Buhr, Klaus Ehrenfried

Abstract:

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head, so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness elements, especially when they are applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factor of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
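
For reference, the quoted integral quantities can be computed from a measured velocity profile by numerical integration, as in the sketch below; the 1/7-power-law profile stands in for HS-PIV data and the dimensions are assumed.

```python
# Boundary-layer displacement thickness, momentum thickness and form factor
# from a wall-normal velocity profile (synthetic 1/7-power-law data).
import numpy as np

y = np.linspace(0.0, 0.05, 200)          # wall-normal coordinate [m]
delta, u_edge = 0.04, 70.0               # assumed boundary-layer thickness, edge velocity
u = u_edge * np.minimum(y / delta, 1.0) ** (1.0 / 7.0)

delta_star = np.trapz(1.0 - u / u_edge, y)              # displacement thickness
theta = np.trapz((u / u_edge) * (1.0 - u / u_edge), y)  # momentum thickness
print(delta_star, theta, delta_star / theta)             # form factor H = delta*/theta
```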

Keywords: Boundary layer, high-speed PIV, ICE3, moving train model, roughness elements.

121 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class- Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

The problems arising from unbalanced data sets generally appear in real world applications. Due to unequal class distributions, many researchers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is a method that has been proposed for classifying unbalanced classes with good performance. In this study, methods of discriminant analysis are of interest for investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. The purpose of this study was to compare the classification performance of parametric discriminant analysis and nonparametric discriminant analysis in a three-class classification of class-imbalanced data of diabetes risk groups. Data from a project maintaining healthy conditions for 599 employees of a government hospital in Bangkok were obtained for the classification problem. The employees were divided into three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, including the variables diabetes risk group, age, gender, blood glucose, and BMI, were analyzed and bootstrapped for 50 and 100 samples, with 599 observations per sample, for additional estimation of the misclassification error rate. Each data set was explored for departures from multivariate normality and for the equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were applied to the 50 and 100 bootstrap samples and to the original data. In searching for the optimal classification rule, the prior probabilities were set to both equal proportions (0.33:0.33:0.33) and unequal proportions of (0.90:0.05:0.05), (0.80:0.10:0.10) and (0.70:0.15:0.15). The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of non-risk:risk:diabetic of 0.90:0.05:0.05 or 0.80:0.10:0.10 gave the smallest misclassification error rate. The k-nearest neighbors approach is therefore suggested for classifying three-class-imbalanced data of diabetes risk groups.
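
A compact sketch of the compared classifiers (LDA and QDA with unequal priors, and k-nearest neighbors) using scikit-learn; the data are random placeholders standing in for the three diabetes risk groups, so the printed error rates are not the study's results.

```python
# LDA/QDA with unequal priors vs. k-NN on a synthetic class-imbalanced three-class problem.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(599, 4))                               # age, gender, glucose, BMI (placeholders)
y = rng.choice([0, 1, 2], size=599, p=[0.90, 0.05, 0.05])   # non-risk / risk / diabetic labels

priors = [0.90, 0.05, 0.05]
for name, clf in [("LDA", LinearDiscriminantAnalysis(priors=priors)),
                  ("QDA", QuadraticDiscriminantAnalysis(priors=priors)),
                  ("3-NN", KNeighborsClassifier(n_neighbors=3))]:
    err = 1.0 - cross_val_score(clf, X, y, cv=5).mean()     # misclassification error rate
    print(name, round(err, 3))
```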

Keywords: Bootstrap, diabetes risk groups, error rate, k-nearest neighbors.

120 Multiphase Flow Regime Detection Algorithm for Gas-Liquid Interface Using Ultrasonic Pulse-Echo Technique

Authors: Serkan Solmaz, Jean-Baptiste Gouriet, Nicolas Van de Wyer, Christophe Schram

Abstract:

The efficiency of the cooling process for cryogenic propellant boiling in engine cooling channels in space applications is strongly affected by the phase change that occurs during boiling. The effectiveness of the cooling process strongly depends on the type of boiling regime, such as nucleate or film boiling. Geometric constraints, such as a non-transparent cooling channel, make it impossible to use any visualization method. The ultrasonic (US) technique, as a non-destructive testing (NDT) method, has therefore been applied in almost every engineering field for different purposes. Discontinuities emerge between media, such as the boundaries between different phases. The sound wave emitted by the US transducer is both transmitted and reflected at a gas-liquid interface, which makes it possible to detect different phases. Due to thermal and structural concerns, it is impractical to sustain direct contact between the US transducer and the working fluid. Hence, the transducer should be located outside of the cooling channel, which results in additional interfaces and creates ambiguities about the applicability of the present method. In this work, an exploratory study is carried out to determine the detection ability and applicability of the US technique for the cryogenic boiling process in a cooling cycle where the US transducer is placed outside of the channel. Boiling of cryogenics is a complex phenomenon which poses several hindrances to the experimental protocol because of the thermal properties involved. Thus, substitute materials are purposefully selected based on such parameters to simplify the experiments. Aside from that, the nucleate and film boiling regimes emerging during the boiling process are simply simulated using non-deformable stainless steel balls, air-bubble injection apparatuses and air clearances instead of conducting a real-time boiling process. A versatile detection algorithm is then developed on the basis of these exploratory studies. With the developed algorithm, the phases can be distinguished with 99% accuracy as no-phase, air-bubble, and air-film presences. The results show the detection ability and applicability of the US technique for an exploratory purpose.

Keywords: Ultrasound, ultrasonic, multiphase flow, boiling, cryogenics, detection algorithm.

119 Self-Sensing Concrete Nanocomposites for Smart Structures

Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi

Abstract:

In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures, through the identification of behavioral anomalies due to incipient damages, especially in areas of high environmental hazards as earthquakes. While traditional sensors can be applied only in a limited number of points, providing a partial information for a structural diagnosis, novel transducers may allow a diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are developing in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and tension, could be originated by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be inserted within the concrete elements, transforming the same structures in sets of widespread sensors. This paper is aimed at presenting the results of a research about a new self-sensing nanocomposite and about the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite has been obtained by inserting multi walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored. Among conductive carbon nanofillers, carbon nanotubes seem to be particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to the nanofiller dispersion or to the influence of the nano-inclusions amount in the cement matrix need to be carefully investigated: the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh dough, the electrical properties of the hardened composites and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slow varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
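
The self-sensing principle of correlating strain with the change in electrical resistance can be illustrated with a simple gauge-factor conversion; the gauge factor and resistance readings below are assumed values, not measurements from the developed sensors.

```python
# Strain estimated from the fractional resistance change of a piezoresistive sensor.
baseline_resistance = 1.0e4      # ohms, unstrained nanocomposite sensor (assumed)
gauge_factor = 150.0             # sensitivity of the composite, dR/R0 per unit strain (assumed)

readings = [1.000e4, 1.003e4, 1.006e4, 1.002e4]   # resistance during loading (placeholders)
for r in readings:
    delta_r_over_r = (r - baseline_resistance) / baseline_resistance
    strain = delta_r_over_r / gauge_factor
    print(f"dR/R0 = {delta_r_over_r:.4f} -> strain = {strain:.2e}")
```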

Keywords: Carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring.

118 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners

Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani

Abstract:

Speaking skills merit meticulous attention both on the side of the learners and the teachers. In particular, accuracy is a critical component to guarantee the messages to be conveyed through conversation because a wrongful change may adversely alter the content and purpose of the talk. Different types of tasks have served teachers to meet numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes which have a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication that leads to the interlocutor attempting to remedy the gap through talk. Therefore, this study was an attempt to investigate if there was any significant difference between the effect of reasoning gap tasks and information gap tasks on the frequency of conversational strategies used in negotiation of meaning in classrooms on one hand, and on the accuracy in speaking of Iranian intermediate EFL learners on the other. After a pilot study to check the practicality of the treatments, at the outset of the main study, the Preliminary English Test was administered to ensure the homogeneity of 87 out of 107 participants who attended the intact classes of a 15 session term in one control and two experimental groups. Also, speaking sections of PET were used as pretest and posttest to examine their speaking accuracy. The tests were recorded and transcribed to estimate the percentage of the number of the clauses with no grammatical errors in the total produced clauses to measure the speaking accuracy. In all groups, the grammatical points of accuracy were instructed and the use of conversational strategies was practiced. Then, different kinds of reasoning gap tasks (matchmaking, deciding on the course of action, and working out a time table) and information gap tasks (restoring an incomplete chart, spot the differences, arranging sentences into stories, and guessing game) were manipulated in experimental groups during treatment sessions, and the students were required to practice conversational strategies when doing speaking tasks. The conversations throughout the terms were recorded and transcribed to count the frequency of the conversational strategies used in all groups. The results of statistical analysis demonstrated that applying both the reasoning gap tasks and information gap tasks significantly affected the frequency of conversational strategies through negotiation. In the face of the improvements, the reasoning gap tasks had a more significant impact on encouraging the negotiation of meaning and increasing the number of conversational frequencies every session. The findings also indicated both task types could help learners significantly improve their speaking accuracy. Here, applying the reasoning gap tasks was more effective than the information gap tasks in improving the level of learners’ speaking accuracy.

Keywords: Accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1143
117 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets or collections are becoming important assets in their own right and can now be accepted as a primary intellectual output of research. The quality and usefulness of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, gathered from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of such data is a notoriously tedious, time-consuming process that also requires domain experts, who are mostly unavailable. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, features have been selected, and preliminary exploratory data analysis has been performed to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors, and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using well-known measures including Accuracy, Hamming Loss, Micro-F, and Macro-F. The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared with the other multi-label classification methods tested, while the Classifier Chains method showed the worst performance. To recap, the benchmark achieved promising results, and the preliminary exploratory data analysis performed on the collection proposes new directions for research and provides a baseline for future studies.
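As a rough sketch of the benchmarking workflow described above, the snippet below uses scikit-learn stand-ins for two of the nine methods (OneVsRestClassifier for Binary Relevance and ClassifierChain for Classifier Chains) on tiny synthetic placeholder texts and labels; the actual collection, the remaining methods, and the paper's experimental setup are not reproduced here.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.multioutput import ClassifierChain
from sklearn.metrics import accuracy_score, hamming_loss, f1_score

# Placeholder program educational objectives with multi-hot labels over
# three hypothetical student outcomes (the real collection covers 32 programs).
texts = [
    "apply mathematics and science to solve engineering problems",
    "communicate effectively and work in multidisciplinary teams",
    "design systems under realistic economic and safety constraints",
    "engage in lifelong learning and professional development",
]
Y = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])

X = TfidfVectorizer().fit_transform(texts)

models = {
    "Binary Relevance": OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    "Classifier Chains": ClassifierChain(LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    # Training-set predictions only, for illustration of the metrics
    pred = model.fit(X, Y).predict(X).astype(int)
    print(name,
          "| subset accuracy:", accuracy_score(Y, pred),
          "| Hamming loss:", round(hamming_loss(Y, pred), 3),
          "| micro-F:", round(f1_score(Y, pred, average="micro"), 3),
          "| macro-F:", round(f1_score(Y, pred, average="macro"), 3))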

Keywords: Benchmark collection, program educational objectives, student outcomes, ABET, Accreditation, machine learning, supervised multiclass classification, text mining.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 809
116 Structural Parsing of Natural Language Text in Tamil Using Phrase Structure Hybrid Language Model

Authors: Selvam M., Natarajan A. M., Thangarajan R.

Abstract:

Parsing is important in Linguistics and Natural Language Processing for understanding the syntax and semantics of a natural language grammar. Parsing natural language text is challenging because of problems such as ambiguity and inefficiency, and the interpretation of natural language text depends on context-based techniques. A probabilistic component is essential to resolve ambiguity in both syntax and semantics, thereby increasing the accuracy and efficiency of the parser. The Tamil language has inherent features that make parsing even more challenging. To obtain solutions, a lexicalized and statistical approach must be applied to parsing with the aid of a language model. Statistical models focus mainly on the semantics of the language and suit large-vocabulary tasks, whereas structural methods focus on syntax and model small-vocabulary tasks. A trigram-based statistical language model for Tamil with a medium vocabulary of 5000 words has been built. Although statistical parsing gives better performance through trigram probabilities and large vocabulary size, it has disadvantages such as its focus on semantics rather than syntax and its lack of support for free word order and long-distance relationships. To overcome these disadvantages, a structural component must be incorporated into statistical language models, which leads to hybrid language models. This paper attempts to build a phrase-structure hybrid language model that resolves the above-mentioned disadvantages. In developing the hybrid language model, a new part-of-speech tag set for Tamil with more than 500 tags and wide coverage has been developed. A phrase-structure Treebank of 326 Tamil sentences covering more than 5000 words has also been built. A hybrid language model has been trained on the phrase-structure Treebank using the immediate-head parsing technique. A lexicalized and statistical parser that employs this hybrid language model and immediate-head parsing gives better results than pure grammar-based and trigram-based models.
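As a toy illustration of the probabilistic phrase-structure component mentioned above (a probabilistic context-free grammar), the sketch below uses NLTK's ViterbiParser on a small hand-written English PCFG; the paper's actual Tamil tag set, Treebank, and immediate-head parsing model are not reproduced here, and the grammar and sentence are invented placeholders.

from nltk import PCFG
from nltk.parse import ViterbiParser

# Toy PCFG with an ambiguous prepositional-phrase attachment; the rule
# probabilities let the Viterbi parser return the single most likely tree,
# which is how a statistical component resolves structural ambiguity.
grammar = PCFG.fromstring("""
    S  -> NP VP      [1.0]
    NP -> Det N      [0.6]
    NP -> Det N PP   [0.4]
    VP -> V NP       [0.7]
    VP -> V NP PP    [0.3]
    PP -> P NP       [1.0]
    Det -> 'the'     [1.0]
    N -> 'teacher'   [0.4]
    N -> 'book'      [0.3]
    N -> 'student'   [0.3]
    V -> 'gave'      [1.0]
    P -> 'to'        [1.0]
""")

parser = ViterbiParser(grammar)
tokens = "the teacher gave the book to the student".split()
for tree in parser.parse(tokens):   # yields the most probable parse
    tree.pretty_print()
    print("parse probability:", tree.prob())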

Keywords: Hybrid Language Model, Immediate Head Parsing, Lexicalized and Statistical Parsing, Natural Language Processing, Parts of Speech, Probabilistic Context Free Grammar, Tamil Language, Tree Bank.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3621
115 Evaluating the Performance of Organic, Inorganic and Liquid Sheep Manure on Growth, Yield and Nutritive Value of Hybrid Napier CO-3

Authors: F. A. M. Safwan, H. N. N. Dilrukshi, P. U. S. Peiris

Abstract:

Limited availability of high-quality green forage leads to low productivity in the national dairy herd of Sri Lanka. Growing grass and fodder to suit the production system is an efficient and economical solution to this problem. Among the fodder varieties grown in the country, CO-3 ranks highly for tillering capacity, green forage yield, regeneration capacity, leaf-to-stem ratio, crude protein content, resistance to pests and diseases, and freedom from adverse factors. An experiment was designed to determine the effect of organic sheep manure, inorganic fertilizer, and liquid sheep manure on the growth, yield, and nutritive value of CO-3. The study consisted of three treatments: sheep manure (T1), recommended inorganic fertilizer (T2), and liquid sheep manure (T3) prepared using the bucket fermentation method; each treatment had three replicates, assigned randomly. The first harvest was obtained 40 days after plant establishment, and the number of leaves (NL), leaf area (LA), tillering capacity (TC), fresh weight (FW), and dry weight (DW) were recorded; the second harvest was obtained 30 days after the first, and the same data were recorded. SPSS 16 was used for data analysis, and AOAC (2000) standard methods were used for proximate analysis. Results revealed that plants treated with T1 recorded the highest NL, LA, TC, FW, and DW at both the first and second harvests of CO-3, and the differences were statistically significant (p < 0.05); T1 differed significantly from T2 and T3. Although T3 recorded higher values than T2 for almost all growth parameters, the difference was not statistically significant (p > 0.05). In addition, crude protein content was highest in T1 (18.33 ± 1.61) and lowest in T2 (10.82 ± 1.14), a statistically significant difference (p < 0.05). The other proximate components (crude fiber, crude fat, ash, moisture content, and dry matter) did not differ significantly between treatments (p > 0.05). According to these results, the organic fertilizer is the best fertilizer for CO-3 in terms of growth parameters and crude protein content.
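A minimal sketch of the kind of treatment comparison described above (three fertilizer treatments compared at p < 0.05), using a one-way ANOVA from SciPy on invented placeholder measurements rather than the study's SPSS workflow and data:

import numpy as np
from scipy import stats

# Hypothetical fresh-weight values (g) for three replicates per treatment;
# the study's actual measurements are not reproduced here.
t1_sheep_manure  = np.array([412.0, 398.5, 420.3])
t2_inorganic     = np.array([350.2, 342.8, 360.1])
t3_liquid_manure = np.array([365.4, 371.9, 358.7])

f_stat, p_value = stats.f_oneway(t1_sheep_manure, t2_inorganic, t3_liquid_manure)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one treatment mean differs significantly (p < 0.05).")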

Keywords: Fertilizer, growth parameters, Hybrid Napier CO-3, proximate composition.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1356
114 Designing a Socio-Technical System for Groundwater Resources Management, Applying Smart Energy and Water Meter

Authors: S. Mahdi Sadatmansouri, Maryam Khalili

Abstract:

The world today faces a serious water scarcity problem. In recent years, with the advent of the Smart Energy and Water Meter (SEWM) and its installation at the electro-pumps of water wells, it was believed that this could be the golden key to addressing the over-pumping of groundwater resources. In fact, implementation of these smart meters controlled the water table drawdown for a short time, but it was not a sustainable approach. The SEWM was initially regarded as a law-enforcement facility; however, solving a complex socioeconomic problem such as shared groundwater resources management requires more than enforcement: it requires participation to conserve common resources. Well owners and farmers, as water consumers, are the main and direct stakeholders of this system; other stakeholders include government sectors, investors, technology providers, the private sector, and ordinary people. Designing a socio-technical system not only defines the role of each stakeholder but also facilitates communication toward the system's goals while the benefits of each stakeholder are considered and provided. Farmers, as the key participants in solving the groundwater problem, do not trust governments, but they would trust a fair system in which responsibilities, privileges, and benefits are clear. Technology can help this system remain impartial and productive. Social aspects provide rules, regulations, social objects, and the like for the system and help make it more human-centered. As the design methodology, Design Thinking provides probable solutions for challenging problems and ongoing conflicts, and it can illuminate the way in which the final system could be designed. Using IDEO's Human-Centered Design approach helps keep farmers at the center of the solution and provides a vision by which stakeholders' requirements and needs are addressed effectively. Farmers can be expected to trust the system and participate in managing their groundwater resources if they find its rules and tools fair and effective. Moreover, implementation of the socio-technical system could change farmers' behavior so that they care more about their valuable shared water resources as well as their farm profit. This socio-technical system contains nine main subsystems: 1) Measurement and Monitoring system, 2) Legislation and Governmental system, 3) Information Sharing system, 4) Knowledge-based NGOs, 5) Integrated Farm Management system (using IoT), 6) Water Market and Water Banking system, 7) Gamification, 8) Agribusiness ecosystem, 9) Investment system.

Keywords: Design Thinking, Human Centered Design, participatory management, Smart Energy and Water Meter (SEWM), socio-technical system, water table drawdown, Internet of Things, Gamification

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 770
113 Catalytic Pyrolysis of Sewage Sludge for Upgrading Bio-Oil Quality Using Sludge-Based Activated Char as an Alternative to HZSM5

Authors: Ali Zaker, Zhi Chen

Abstract:

Due to concerns about the depletion of fossil fuel sources and the deteriorating environment, investigating the production of renewable energy plays a crucial role in alleviating dependency on mineral fuels. One particular area of interest is the generation of bio-oil through sewage sludge (SS) pyrolysis. Compared with other types of biomass, SS is a strong candidate because of its availability and low cost. However, the presence of high-molecular-weight hydrocarbons and oxygenated compounds in SS bio-oil hinders some of its fuel applications. In this context, catalytic pyrolysis is an attainable route to upgrade bio-oil quality. Among the different catalysts (e.g., zeolites) studied for SS pyrolysis, activated chars (AC) are eco-friendly alternatives. The beneficial features of AC derived from SS include its comparatively large surface area, porosity, enriched surface functional groups, and high content of metal species that can improve catalytic activity. Hence, in this study a sludge-based AC catalyst was fabricated in a single-step pyrolysis reaction with NaOH as the activation agent and was compared with HZSM5 zeolite. The thermal decomposition and kinetics were investigated via thermogravimetric analysis (TGA) to guide and control the pyrolysis and catalytic pyrolysis and to design the pyrolysis setup. The results indicated that pyrolysis and catalytic pyrolysis proceed in four distinct stages, with the main decomposition reaction occurring in the range of 200-600 °C. The Coats-Redfern method was applied to the 2nd and 3rd devolatilization stages to estimate the reaction order and activation energy (E) from the mass-loss data. The average activation energy (Em) values for reaction orders n = 1, 2, and 3 were in the range of 6.67-20.37 kJ/mol for SS, 1.51-6.87 kJ/mol for HZSM5, and 2.29-9.17 kJ/mol for AC, respectively. According to these results, both AC and HZSM5 improved the reaction rate of SS pyrolysis by reducing the Em value. Moreover, to generate bio-oil and examine the effect of the catalysts on its quality, a fixed-bed pyrolysis system was designed and implemented. The composition of the produced bio-oil was analyzed via gas chromatography/mass spectrometry (GC/MS). The selected SS-to-catalyst ratios were 1:1, 2:1, and 4:1; the optimum ratio for cracking long-chain hydrocarbons and removing oxygen-containing compounds was 1:1 for both catalysts. The upgraded bio-oils with HZSM5 and AC were in the overall range of C4-C17, with around 72% in the range of C4-C9. The bio-oil from non-catalytic pyrolysis of SS contained 49.27% oxygenated compounds, which dropped to 7.3% and 13.02% in the presence of HZSM5 and AC, respectively. Meanwhile, the generation of value-added chemicals such as light aromatic compounds was significantly improved in the catalytic process. Furthermore, the fabricated AC catalyst was characterized by BET, SEM-EDX, FT-IR, and TGA techniques. Overall, this research demonstrated that AC is an efficient catalyst for the pyrolysis of SS and can be used as a cost-competitive alternative to HZSM5.
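As a sketch of the Coats-Redfern linearization referred to above, the snippet below fits ln[g(α)/T²] against 1/T for reaction orders n = 1, 2, and 3 and reads the activation energy off the slope; the temperature and conversion data are invented placeholders, not the paper's TGA measurements.

import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

# Placeholder conversion profile over the main devolatilization range (200-600 °C);
# in practice alpha would come from the instrument's mass-loss curve.
T = np.linspace(473.0, 873.0, 60)                    # temperature, K
alpha = np.clip((T - 473.0) / 420.0, 1e-3, 0.999)    # assumed degree of conversion

def g_alpha(a, n):
    # Integral reaction model g(alpha) for reaction order n
    if n == 1:
        return -np.log(1.0 - a)
    return (1.0 - (1.0 - a) ** (1.0 - n)) / (1.0 - n)

for n in (1, 2, 3):
    # Coats-Redfern: ln[g(alpha)/T^2] = ln[A*R/(beta*E)] - E/(R*T)
    y = np.log(g_alpha(alpha, n) / T ** 2)
    slope, intercept = np.polyfit(1.0 / T, y, 1)
    E = -slope * R / 1000.0                          # activation energy, kJ/mol
    print(f"n = {n}: E = {E:.2f} kJ/mol")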

Keywords: Activated char, bio-oil, catalytic pyrolysis, HZSM5, sewage sludge.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 666
112 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients

Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi

Abstract:

Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as time until death. A frailty model is a random-effect model for time-to-event data in which the random effect has a multiplicative influence on the baseline hazard function. This article investigates the use of a gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: Data were drawn from the records of patients with liver cirrhosis who were scheduled for liver transplantation during the one-year study period (May 2008-May 2009) and were followed up for at least seven years at Imam Khomeini Hospital in Iran. To determine the factors affecting cirrhotic patients' survival in the presence of latent variables, the gamma frailty distribution was applied. Parametric models, namely the Exponential and Weibull distributions, were considered for survival time. Data analysis was performed using R, with a significance level of 0.05 for all tests. Results: A total of 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of the patients was 39.8 years. By the end of the study, 82 (26%) patients had died, of whom 48 (58%) were men and 34 (42%) were women. The main cause of liver cirrhosis was hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second most common cause. Overall, the mean survival time over the 7-year follow-up was 28.44 months; it was 19.33 months for deceased patients and 31.79 months for censored patients. Exponential and Weibull regression models incorporating the gamma frailty distribution were fitted to the cirrhosis data. In both models, age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: To investigate the factors affecting time to death of patients with liver cirrhosis in the presence of latent variables, a gamma frailty model with parametric distributions appears desirable.
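For reference, a standard way to write a gamma frailty model with a parametric baseline (generic textbook notation, not taken from the paper itself): the frailty Z acts multiplicatively on the baseline hazard, and integrating Z out gives the marginal survival function.

h(t \mid Z, x) = Z \, h_0(t) \, e^{\beta^{\top} x}, \qquad Z \sim \mathrm{Gamma}\!\left(\tfrac{1}{\theta}, \tfrac{1}{\theta}\right), \quad \mathbb{E}[Z] = 1, \ \mathrm{Var}(Z) = \theta

S(t \mid x) = \left[ 1 + \theta \, H_0(t) \, e^{\beta^{\top} x} \right]^{-1/\theta}, \qquad H_0(t) = \int_0^t h_0(u) \, du

Under one common Weibull parametrization, H_0(t) = (\lambda t)^{k}; the Exponential baseline is the special case k = 1.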

Keywords: Frailty model, latent variables, liver cirrhosis, parametric distribution.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1035
111 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy

Authors: May Fadheel Estephan, Richard Perks

Abstract:

Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes, but current methods for cancer detection have limitations such as low sensitivity and specificity. The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS), a non-invasive optical technique that can characterize the size and concentration of particles in a solution. An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2 μm, 0.8 μm, and 0.413 μm. The spectra were collected using a spectrometer and a computer, and were analysed with a software program that used a mathematical algorithm to fit them to a theoretical model in order to determine the size and concentration of the spheres. The results showed that the optical probe was able to differentiate between the three sphere sizes and to detect polystyrene spheres at suspension concentrations as low as 0.01%. The question addressed by this study was whether ELSS could be used to detect cancer cells; the ability to differentiate between particle sizes suggests that it could. Because ELSS can non-invasively characterize the size and concentration of cells in a tissue sample, this information could be used to identify cancer cells and assess the stage of the disease, demonstrating the potential of ELSS for early cancer detection. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.
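As a rough sketch of the size-dependent scattering that underlies ELSS, the snippet below computes Mie scattering efficiencies for the three polystyrene sphere diameters mentioned above; it assumes the third-party miepython package and its mie(m, x) helper, and the refractive indices and wavelength grid are illustrative values, not the study's experimental configuration.

import numpy as np
import miepython  # assumed third-party Mie-theory package

wavelengths = np.linspace(0.4, 0.8, 100)     # illustrative visible-range wavelengths, um
n_sphere, n_medium = 1.59, 1.33              # approx. polystyrene and water refractive indices
m = n_sphere / n_medium                      # relative refractive index

for d in (2.0, 0.8, 0.413):                  # sphere diameters from the study, um
    x = np.pi * d * n_medium / wavelengths   # size parameter in the medium
    qext, qsca, qback, g = miepython.mie(m, x)
    print(f"d = {d} um: mean scattering efficiency Qsca = {qsca.mean():.2f}")

Because the scattering efficiency varies differently with wavelength for each diameter, measured spectra can in principle be fitted to such a model to recover particle size and concentration.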

Keywords: Elastic Light Scattering Spectroscopy, Polystyrene spheres in suspension, optical probe, fibre optics.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 63
110 Education and Assessment of Civil Employees in e-Government: The Case of a Moodle Based Platform

Authors: Stamatios A. Theocharis, George A. Tsihrintzis

Abstract:

One of the most important factors for the success of e-government is training and preparing the workforce of the public sector. As changes and innovation in the public sector progress at a very slow pace, and more slowly than in the private sector, issues related to human resources require special care. This is because it is the workforce that must ultimately seize the opportunities offered by the technological solutions used in e-Government. Thus, the central administration should provide employees with continuous and focused training, not only on new technologies but on a wide range of subjects, and should also improve interdepartmental interaction.

To achieve all this, new methods and training tools need to be implemented, in addition to assessment of the employees. In this spirit, we propose the development of an educational platform with user personalization features, built on Moodle as the basic tool. Incorporating a personalization mechanism is very important, since different employees have different backgrounds, education levels, computer skills, and capacities for further development. Key features of the proposed platform include, besides typical e-learning tools, communities organized to exchange experience and knowledge, groups of users formed on certain criteria, automatic evaluation of users, and support for self-education and self-assessment. In its fully developed form, this platform can be part of a more comprehensive knowledge management system for the public sector.

Keywords: e-Government, civil employees education, education technologies.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1913