Search results for: force field methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6745

115 Weaving Social Development: An Exploratory Study of Adapting Traditional Textiles Using Indigenous Organic Wool for the Modern Interior Textiles Market

Authors: Seema Singh, Puja Anand, Alok Bhasin

Abstract:

The interior design profession aims to create aesthetically pleasing design solutions for human habitats, but of late, growing awareness of depleting environmental resources, both tangible and intangible, and of damage to the eco-system has led to a quest for healthy and sustainable interior environments. The paper proposes adapting traditionally produced organic wool textiles for the mainstream interior design industry. This can create sustainable livelihoods, building eco-friendly bridges between interior designers, consumers, and pastoral communities. This study focuses on traditional textiles produced by two pastoral communities from India that use organic wool from indigenous sheep varieties. The Gaddi communities of Himachal Pradesh use wool from the Gaddi sheep breed to create Pattu (a multi-purpose textile). The Kurumas of Telangana weave a blanket called the Gongadi, using wool from the Black Deccani variety of sheep. These communities have traditionally reared indigenous sheep breeds for their wool and produce hand-spun and hand-woven textiles for their own consumption, using traditional, chemical-free processes. Based on data collected personally during field visits and documentation of the traditional crafts of these pastoral communities, and using traditionally produced indigenous organic wool, the authors have developed innovative textile samples through design interventions and exploration of dyeing and weaving techniques. As part of the secondary research, the role of pastoralism in sustaining the eco-systems of Himachal Pradesh and Telangana was studied, as well as the role of organic wool in creating healthy interior environments. The authors found that natural wool from indigenous sheep breeds can be used to create interior textiles with the potential to be marketed to an urban audience, which will help create earnings for pastoral communities. Literature studies have shown that organic and sustainable wool can reduce indoor pollution and toxicity levels in interiors, helping to create healthier interior environments. Revival of indigenous sheep breeds can further help rejuvenate dying crafts, and promotion of these indigenous textiles can help sustain traditional eco-systems and the pastoral communities whose way of life is endangered today. Based on the research and findings, the authors propose that adapting traditional textiles has potential for application in interiors, creating eco-friendly spaces. Interior textiles produced through such sustainable processes can help reduce indoor pollution, give livelihood opportunities to traditional economies, and leave an almost zero carbon footprint while being in sync with available natural resources, ultimately benefiting society. The win-win situation for all stakeholders in this eco-friendly model makes it pertinent to re-think how we design lifestyle textiles for interiors. This study illustrates a specific example from two pastoral communities and can serve as a model that works equally well in any community, regardless of geography.

Keywords: Design Intervention, Eco-Friendly, Healthy Interiors, Indigenous, Organic Wool, Pastoralism, Sustainability.

114 Investigating Prostaglandin E2 and Intracellular Oxidative Stress Levels in Lipopolysaccharide-Stimulated RAW 264.7 Macrophages upon Treatment with Strobilanthes crispus

Authors: Anna Pick Kiong Ling, Jia May Chin, Rhun Yian Koh, Ying Pei Wong

Abstract:

Background: Uncontrolled inflammation may cause serious inflammatory diseases if left untreated. Non-steroidal anti-inflammatory drugs (NSAIDs) are commonly used to inhibit pro-inflammatory enzymes and thus reduce inflammation. However, long-term administration of NSAIDs leads to various complications. Medicinal plants are receiving more attention as they are believed to be more compatible with the human body. One of these is the flavonoid-containing medicinal plant Strobilanthes crispus, which has traditionally been claimed to possess anti-inflammatory and antioxidant activities. Nevertheless, its anti-inflammatory activities are yet to be scientifically documented. Objectives: This study aimed to examine the anti-inflammatory activity of S. crispus by investigating its effects on intracellular oxidative stress and prostaglandin E2 (PGE2) levels. Materials and Methods: In this study, the Maximum Non-toxic Dose (MNTD) of the methanol extract of both leaves and stems of S. crispus was first determined using the 3-(4,5-dimethylthiazolyl-2)-2,5-diphenyltetrazolium bromide (MTT) assay. The effects of S. crispus extracts at the MNTD and half MNTD (½MNTD) on intracellular ROS and PGE2 levels in 1.0 µg/mL LPS-stimulated RAW 264.7 macrophages were then measured using DCFH-DA and a competitive enzyme immunoassay kit, respectively. Results: The MNTD of the leaf extract was determined to be 700 µg/mL, while that of the stem was as low as 1.4 µg/mL. When LPS-stimulated RAW 264.7 macrophages were treated with the MNTD of S. crispus leaf extract, both intracellular ROS and PGE2 levels were significantly reduced. In contrast, the stem extract at both the MNTD and ½MNTD did not significantly reduce the PGE2 level, but significantly increased the intracellular ROS level. Conclusion: The methanol leaf extract of S. crispus may possess anti-inflammatory properties, as it significantly reduces the intracellular ROS and PGE2 levels of LPS-stimulated cells. Nevertheless, further studies, such as investigating interleukin, nitric oxide and tumor necrosis factor-α (TNF-α) levels, have to be conducted to further confirm the anti-inflammatory properties of S. crispus.

Keywords: Anti-inflammatory, natural products, prostaglandin E2, reactive oxygen species.

113 Optimization of the Headspace Solid-Phase Microextraction Gas Chromatography for Volatile Compounds Determination in Phytophthora Cinnamomi Rands

Authors: Rui Qiu, Giles Hardy, Dong Qu, Robert Trengove, Manjree Agarwal, YongLin Ren

Abstract:

Phytophthora cinnamomi (P. c) is a plant-pathogenic oomycete that is capable of damaging plants in commercial production systems and natural ecosystems worldwide. The most common methods for the detection and diagnosis of P. c infection are expensive, elaborate and time-consuming. This study was carried out to examine whether species-specific and life-cycle-specific volatile organic compounds (VOCs) produced by P. c and another oomycete, Pythium dissotocum, can be absorbed by solid-phase microextraction fibers and detected by gas chromatography. A headspace solid-phase microextraction (HS-SPME) together with gas chromatography (GC) method was developed and optimized for the identification of the VOCs released by P. c. The optimized parameters included type of fiber, exposure time, desorption temperature and desorption time. Optimization was achieved with the analytes of P. c+V8A and V8A alone. To perform the HS-SPME, six types of fiber were assayed and compared: 7 μm polydimethylsiloxane (PDMS), 100 μm polydimethylsiloxane (PDMS), 50/30 μm divinylbenzene/Carboxen™/polydimethylsiloxane (DVB/CAR/PDMS), 65 μm polydimethylsiloxane/divinylbenzene (PDMS/DVB), 85 μm polyacrylate (PA) and 85 μm Carboxen™/polydimethylsiloxane (Carboxen™/PDMS). In a comparison of the efficacy of the fibers, the bipolar fiber DVB/CAR/PDMS had a higher extraction efficiency than the other fibers. An exposure time of 16 h with the DVB/CAR/PDMS fiber in the sample headspace was enough to reach the maximum extraction efficiency. A desorption time of 3 min in the GC injector at a desorption temperature of 250°C was enough for the fiber to desorb the compounds of interest. The chromatograms and morphology study confirmed that the VOCs from P. c+V8A had distinct differences from V8A alone, as did different life cycle stages of P. c and different taxa such as Pythium dissotocum. The study proved that P. c has species- and life-cycle-specific VOCs, which in turn demonstrated the feasibility of this method as a means of detecting and diagnosing P. c infection.

Keywords: Gas chromatography, headspace solid-phase microextraction, optimization, volatile compounds.

112 An Epidemiological Study on an Outbreak of Gastroenteritis Linked to Dinner Served at a Senior High School in Accra

Authors: Benjamin Osei Tutu, Rita Asante, Emefa Atsu

Abstract:

Background: An outbreak of gastroenteritis occurred in December 2019 after students of a Senior High School in Accra were served kenkey and fish during their dinner. An investigation was conducted to characterize the affected people and identify the source of contamination and the etiologic food and agent. Methods: An epidemiological study was conducted, with cases selected from the students who were ill. Controls were selected from among students who also ate from the school canteen during dinner but were not ill. A food history of each case and control was taken to assess their exposure status. Epi Info 7 was used to analyze the data obtained from the outbreak. Attack rates and odds ratios were calculated to determine the risk of foodborne infection for each of the foods consumed by the population. The source of contamination of the foods was ascertained by conducting an environmental risk assessment at the school. Results: Data were obtained from 126 students, of whom 57 (45.2%) were cases and 69 (54.8%) were controls. The cases presented with symptoms such as diarrhea (85.96%), abdominal cramps (66.67%), vomiting (50.88%), headache (21.05%), fever (17.86%) and nausea (3.51%). The peak incubation period was 18 hours, with minimum and maximum incubation periods of 6 and 50 hours, respectively. From the incubation period, duration of illness and symptoms, non-typhoidal salmonellosis was suspected. Multivariate analysis indicated that the illness was associated with the consumption of the fried fish served; however, this was not statistically significant (AOR 3.1.00, P = 0.159). No stool, blood or food samples were available for organism isolation and confirmation of the suspected etiologic agent. The environmental risk assessment indicated poor hand-washing practices on the part of both the food handlers and students. Conclusion: The outbreak was probably due to the consumption of fried fish that might have been contaminated with Salmonella sp. as a result of poor hand-washing practices at the school.
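The attack-rate and odds-ratio calculations described above (performed in the study with Epi Info 7) reduce to simple 2x2-table arithmetic. The sketch below illustrates that arithmetic in Python with invented counts, not the outbreak's actual table.

```python
# Case-control 2x2 arithmetic: attack rate per exposure group and the
# odds ratio for a suspect food. All counts here are hypothetical.
def attack_rate(ill, total):
    """Proportion of an exposure group that fell ill."""
    return ill / total

def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a = exposed ill, b = exposed well,
    c = unexposed ill, d = unexposed well."""
    return (a * d) / (b * c)

# Hypothetical counts: 40 of 70 fish-eaters ill, 17 of 56 non-eaters ill.
ar_exposed = attack_rate(40, 70)          # ~0.571
or_fish = odds_ratio(40, 30, 17, 39)      # ~3.06, i.e. elevated odds
```

An OR above 1 flags the food as a candidate vehicle; significance still depends on the confidence interval or P-value, as the abstract's non-significant result shows.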

Keywords: Case control study, food poisoning, handwashing, Salmonella, school.

111 Exploring the Perspective of Service Quality in mHealth Services during the COVID-19 Pandemic

Authors: Wan-I Lee, Nelio Mendoza Figueredo

Abstract:

The impact of COVID-19 has a significant effect on all sectors of society globally. Health information technology (HIT) has become an effective health strategy in this age of distancing, and Mobile Health (mHealth) plays a critical role in managing patient and provider workflows during the COVID-19 pandemic. Users' perception of the service quality of mHealth services therefore plays a significant role in shaping their confidence and subsequent intention to use them. The objective of this study was to explore, through qualitative methods, how health practitioners and patients are satisfied or dissatisfied with mHealth services, and to analyze users' intention to use them in the context of Taiwan during the COVID-19 pandemic. The research explores the experienced usability of mHealth services during the pandemic using in-depth, semi-structured interviews that investigate participants' perceptions and experiences and the meanings they attribute to them. The five cases consisted of health practitioners, clinic staff, and patients with experience using mHealth services. Participants were encouraged to discuss issues related to the research question through open-ended questions, usually in one-to-one interviews. The findings show the positive and negative attributes of mHealth service quality. The issues most important to patients and health practitioners fall into three dimensions of perceived service quality: system quality, information quality, and interaction quality. A concept map of perceptions regarding users' intention to use mHealth services in emergencies is depicted. The findings revealed that users pay more attention to "medical care", "ease of use" and "utilitarian benefits" and attach less importance to "admissions and convenience" and "social influence". To improve mHealth services, mHealth providers and health practitioners should better manage users' experiences. This research contributes to the understanding of service quality issues in mHealth services during the COVID-19 pandemic.

Keywords: COVID-19, mobile health, mHealth, service quality, use intention.

110 A Study on the Differential Diagnostic Model for Newborn Hearing Loss Screening

Authors: Chun-Lang Chang

Abstract:

According to statistics, the prevalence of congenital hearing loss in Taiwan is approximately 6 per 1,000 newborns; furthermore, 1 per 1,000 infants have severe hearing impairment. Hearing ability during infancy has a significant impact on the development of children's oral expression, language maturity, cognitive performance, educational ability and social behaviors later in life. Although most children born with hearing impairment have sensorineural hearing loss, almost every child retains at least some residual hearing. If provided with a hearing aid or cochlear implant (a bionic ear) in time, in addition to hearing and speech training, even severely hearing-impaired children can still learn to talk. On the other hand, those who are not diagnosed, and are thus unable to begin hearing and speech rehabilitation in a timely manner, may lose an important opportunity to live a complete and healthy life. Eventually, the lack of hearing and speaking ability will affect the development of mental and physical functions, intelligence, and social adaptability. Not only does this problem result in an irreparable, lifelong loss for the hearing-impaired child, but it also creates a heavy burden for the family and society. Therefore, it is necessary to establish a computer-assisted predictive model that can accurately detect and help diagnose newborn hearing loss, so that early interventions can be provided in time and waste of medical resources can be eliminated. This study uses information from the neonatal database of the case hospital as its subjects, adopting two different analysis approaches: using a support vector machine (SVM) for model prediction directly, and using logistic regression to conduct factor screening prior to SVM model prediction, to compare the results. The results indicate that prediction accuracy is as high as 96.43% when the factors are screened and selected through logistic regression. Hence, the model constructed in this study can offer real help in clinical diagnosis for physicians and genuinely benefit early intervention for newborn hearing impairment.
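The two-stage approach described above can be sketched as follows. This is an illustrative scikit-learn pipeline on synthetic data, with an assumed coefficient cutoff for the screening step; the study's actual dataset, factors, and model settings are not reproduced here.

```python
# Sketch: logistic regression screens candidate factors, then an SVM is
# trained on the retained factors only. Data are synthetic; the 0.2
# coefficient cutoff is an illustrative assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=12,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: screen factors by the magnitude of their logistic coefficients.
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
keep = np.abs(logit.coef_[0]) > 0.2
if not keep.any():
    keep[:] = True        # fall back to all factors if none pass the cutoff

# Stage 2: train the SVM on the screened factors only.
svm = SVC(kernel="rbf").fit(X_tr[:, keep], y_tr)
accuracy = svm.score(X_te[:, keep], y_te)
```

The screening step discards weakly predictive factors before the SVM sees them, which is the mechanism the abstract credits for the improved 96.43% accuracy.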

Keywords: Data mining, Hearing impairment, Logistic regression analysis, Support vector machines

109 Deformation Characteristics of Fire Damaged and Rehabilitated Normal Strength Concrete Beams

Authors: Yeo Kyeong Lee, Hae Won Min, Ji Yeon Kang, Hee Sun Kim, Yeong Soo Shin

Abstract:

In recent years, fire accidents have steadily increased and the amount of property damage caused by them has gradually risen. Beyond such property damage, fire incidents cause strength degradation and member deformation in building structures, undermining their structural capacity. Examining this degradation and deformation is very important, because reusing a building is more economical than reconstruction. Therefore, engineers need to investigate the strength degradation and member deformation carefully and make sure that they apply the right rehabilitation methods. This study aims at evaluating the deformation characteristics of fire-damaged and rehabilitated normal strength concrete beams through both experiments and finite element (FE) analyses. For the experiments, control beams, fire-damaged beams and rehabilitated beams are tested to examine deformation characteristics. Ten test beam specimens with a compressive strength of 21 MPa are fabricated, with main test variables of cover thickness (40 mm and 50 mm) and fire exposure time (1 hour or 2 hours). After heating, the fire-damaged beams are air-cured for 2 months, and the rehabilitated beams are repaired with polymeric cement mortar after the fire-damaged concrete cover is removed. All beam specimens are tested under four-point loading. FE analyses are executed to investigate the effects of the main parameters applied in the experimental study. Test results show that both the maximum load and stiffness of the rehabilitated beams are higher than those of the fire-damaged beams. In addition, the structural behaviors predicted by the analyses also show a good rehabilitation effect, and the predicted load-deflection curves are similar to the experimental results. Furthermore, the proposed analytical method can be used to predict the deformation characteristics of fire-damaged and rehabilitated concrete beams without the time and cost of the experimental process.

Keywords: Fire, Normal strength concrete, Rehabilitation, Reinforced concrete beam.

108 Corporate Information System Educational Center

Authors: Alquliyev R.M., Kazimov T.H., Mahmudova Sh.C., Mahmudova R.Sh.

Abstract:

This work describes the Educational Center created and successfully maintained at the Institute of Information Technologies of the National Academy of Sciences (NAS) of Azerbaijan. Based on a decision of the board of the Supreme Certifying Commission under the President of the Azerbaijan Republic and the Presidium of the National Academy of Sciences of the Azerbaijan Republic, the Institute of Information Technologies was entrusted with organizing training courses in computer science for all post-graduate students and dissertators of the republic and administering the candidate minimum examinations. Accordingly, the Educational Center teaches computer science to post-graduate students and dissertators, provides a scientific-methodological manual on the effective application of new information technologies in their research work, and administers the candidate minimum examinations. Information and communication technologies offer new opportunities and prospects for teaching and training. The new level of literacy demands the creation of an essentially new technology for obtaining scientific knowledge. Methods of training and development, social and professional requirements, and the globalization of communicative, economic and political projects connected with the construction of a new society all depend on the level of application of information and communication technologies in the educational process. Computer technologies develop the ideas of programmed training and open completely new, as yet unexplored technological approaches to training, connected with the unique capabilities of modern computers and telecommunications. Computer-based training technologies are processes of preparing and transferring information to the trainee by means of a computer. Scientific and technical progress, as well as the global spread of technologies created in the most developed countries of the world, is the main proof of the leading role of education in the 21st century. The information society needs individuals with modern knowledge. In practice, all technologies that use special technical information means (computer, audio, video) are called information technologies of education.

Keywords: Educational Center, post-graduate, database.

107 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system that can identify and sort peaberries automatically at low cost for coffee producers in developing countries. This paper focuses on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. A peaberry is not a defective bean, but neither is it a normal bean: it develops as a single, relatively round seed in a coffee cherry, instead of the usual flat-sided pair of beans, and it has its own value and flavor. To improve the taste of the coffee, it is necessary to separate peaberries from normal beans before the green coffee beans are roasted; otherwise, the two will be roasted together and the overall taste will suffer. During roasting, all beans should be uniform in shape, size, and weight; otherwise, larger beans take longer to roast through. A peaberry has a different size and shape even when it has the same weight as a normal bean, and it roasts more slowly than normal beans, so neither size nor weight sorting provides a good way to select peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. In contrast, picking out peaberries is very difficult, even for trained specialists, because the shape and color of a peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate between normal beans and peaberries as part of the sorting system. As a first step, we applied deep Convolutional Neural Networks (CNN) and the Support Vector Machine (SVM) as machine learning techniques to discriminate between peaberries and normal beans. Better performance was obtained with the CNN than with the SVM for discriminating peaberries. The artificial neural network, trained in this work on a high-performance CPU and GPU, will then be installed on an inexpensive, low-compute Raspberry Pi system. We assume that this system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
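As a hedged illustration of why machine learning can separate the two bean types, the sketch below trains an SVM on two hypothetical shape descriptors (aspect ratio and circularity) using synthetic measurements; the study itself classified bean images directly, and found a CNN performed better than the SVM.

```python
# Peaberries are rounder than flat-sided normal beans, so simple shape
# descriptors can separate the classes. The descriptor values below are
# synthetic stand-ins, not real bean measurements.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical descriptors per bean: [aspect_ratio, circularity]
peaberry = rng.normal([1.1, 0.90], 0.05, size=(50, 2))   # round seeds
normal   = rng.normal([1.6, 0.70], 0.05, size=(50, 2))   # flat-sided pairs
X = np.vstack([peaberry, normal])
y = np.array([1] * 50 + [0] * 50)        # 1 = peaberry, 0 = normal

clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([[1.12, 0.88]])       # a round, peaberry-like sample
```

A CNN skips the hand-crafted descriptor step entirely and learns such features from the images, which is consistent with its better performance reported above.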

Keywords: Convolutional neural networks, coffee bean, peaberry, sorting, support vector machine.

106 Robot-assisted Relaxation Training for Children with Autism Spectrum Disorders

Authors: V. Holeva, V. Aliki Nikopoulou, P. Kechayas, M. Dialechti Kerasidou, M. Papadopoulou, G. A. Papakostas, V. G. Kaburlasos, A. Evangeliou

Abstract:

Cognitive Behavioral Therapy (CBT) has proven an effective tool to address anger and anxiety issues in children and adolescents with Autism Spectrum Disorders (ASD). Robot-enhanced therapy has been used in psychosocial and educational interventions for children with ASD with promising results. Whenever CBT-based techniques have been incorporated in robot-based interventions, they have mainly been performed in group sessions. Objectives: The study’s main objective was the implementation and evaluation of the effectiveness of a relaxation training intervention for children with ASD, delivered by the social robot NAO. Methods: 20 children (aged 7–12 years) were randomly assigned to 16 sessions of relaxation training implemented twice a week. Two groups were formed: the NAO group (children participated in individual sessions with the support of NAO) and the control group (children participated in individual sessions with the support of the therapist only). Participants received three different relaxation scenarios of increasing difficulty (a breathing scenario, a progressive muscle relaxation scenario and a body scan meditation scenario), as well as related homework sheets for practicing. Pre- and post-intervention assessments were conducted using the Child Behavior Checklist (CBCL) and the Strengths and Difficulties Questionnaire for parents (SDQ-P). Participants were also asked to complete an open-ended questionnaire to evaluate the effectiveness of the training. Parents’ satisfaction was evaluated via a questionnaire, and children’s satisfaction was assessed with a thermometer scale. Results: The study supports the use of relaxation training with the NAO robot as an instructor for children with ASD. Parents of enrolled children reported high levels of satisfaction and provided positive ratings of the training’s acceptability. Children in the NAO group presented greater motivation to complete homework and to adopt the learned techniques at home. Conclusions: Relaxation training could be effectively integrated into robot-assisted protocols to help children with ASD regulate emotions and develop self-control.

Keywords: Autism spectrum disorders, CBT, children relaxation training, robot-assisted therapy.

105 Material Concepts and Processing Methods for Electrical Insulation

Authors: R. Sekula

Abstract:

Epoxy composites are broadly used as electrical insulation for high-voltage applications, since only such materials can fulfill the particular mechanical, thermal, and dielectric requirements. However, the properties of the final product depend strongly on a proper manufacturing process that minimizes material failures such as excessive shrinkage, voids and cracks. Therefore, proper materials (epoxy, hardener, and filler) and process parameters (mold temperature, filling time, filling velocity, initial temperature of internal parts, gelation time), as well as design and geometric parameters, are essential for the final quality of the produced components. In this paper, an approach for three-dimensional modeling of all molding stages, namely filling, curing and post-curing, is presented. The reactive molding simulation tool is based on a commercial CFD package and includes dedicated models describing viscosity and reaction kinetics, which have been successfully implemented to simulate the reactive nature of the system with its exothermic effect. A dedicated simulation procedure for stress and shrinkage calculations, along with simulation results, is also presented. The second part of the paper is dedicated to recent developments in formulations of functional composites for electrical insulation applications, focusing on thermally conductive materials. Concepts based on filler modifications for epoxy electrical composites are presented, including the resulting properties. Finally, with tough environmental regulations in mind, an approach for product re-design is presented, in addition to the current process and design aspects, focusing on the replacement of the epoxy material with a thermoplastic one. Such a "design-for-recycling" method is one of the new directions in the development of material and processing concepts for electrical products, and it brings many additional research challenges. One successful product is presented to illustrate the methodology.
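The exothermic curing stage referred to above is commonly described by an autocatalytic (Kamal-type) rate law for the degree of cure alpha. The sketch below integrates one such law with forward Euler; the rate constants and exponents are illustrative assumptions, not material data from the paper, and a production simulation would couple this to the CFD temperature field.

```python
# Kamal-type cure kinetics: d(alpha)/dt = (k1 + k2*alpha^m) * (1 - alpha)^n,
# integrated with forward Euler at fixed (illustrative) rate constants.
def cure_profile(k1=0.01, k2=0.2, m=1.0, n=2.0, dt=0.1, t_end=200.0):
    alpha, t, history = 0.0, 0.0, []
    while t <= t_end:
        history.append((t, alpha))
        dadt = (k1 + k2 * alpha ** m) * (1.0 - alpha) ** n
        alpha = min(alpha + dadt * dt, 1.0)   # degree of cure stays in [0, 1]
        t += dt
    return history

profile = cure_profile()
final_alpha = profile[-1][1]   # approaches full cure as t grows
```

The autocatalytic k2 term makes the reaction accelerate once curing starts, which is what produces the exothermic peak such simulations must capture.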

Keywords: Curing, epoxy insulation, numerical simulations, recycling.

104 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investment became the primary tool for profiting from speculation in financial markets. A significant number of traders and private or institutional investors participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed for building a reliable trend line, which is the basis for limit conditions and automated investment signals, the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals and limit conditions that form a mathematical filter for investment opportunities, and presents the methodology for integrating all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a risk-to-reward ratio of 1:6.12 was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
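The paper's exact algorithm is not reproduced here, but the general idea of a prediction line with limit conditions can be sketched as a least-squares trend line plus a deviation threshold: prices sufficiently below the line trigger an entry, prices sufficiently above it trigger an exit. The 1% threshold and price series below are invented for illustration.

```python
# Illustrative trend-line signal generator, assuming a least-squares fit
# and a fixed fractional deviation limit (both stand-ins, not the
# paper's actual Price Prediction Line formulas).
def trend_line(closes):
    """Least-squares slope and intercept over equally spaced closes."""
    n = len(closes)
    x_mean = (n - 1) / 2
    y_mean = sum(closes) / n
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in enumerate(closes)) \
            / sum((x - x_mean) ** 2 for x in range(n))
    return slope, y_mean - slope * x_mean

def signal(closes, limit=0.01):
    """Buy below the line by more than `limit`, sell above, else hold."""
    slope, intercept = trend_line(closes)
    predicted = slope * len(closes) + intercept   # next-step line value
    last = closes[-1]
    if last < predicted * (1 - limit):
        return "buy"
    if last > predicted * (1 + limit):
        return "sell"
    return "hold"
```

The deviation limit acts as the mathematical filter mentioned above: small wobbles around the line generate no orders, so only meaningful departures become signals.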

Keywords: Algorithmic trading, automated investment system, DAX Deutscher Aktienindex.

103 Enhancing Learning for Research Higher Degree Students

Authors: Jenny Hall, Alison Jaquet

Abstract:

Universities’ push toward the production of high-quality research is not limited to academic staff and experienced researchers. In this environment of research-rich agendas, Higher Degree Research (HDR) students are increasingly expected to publish good-quality papers in high-impact journals. IFN001: Advanced Information Research Skills (AIRS) is a credit-bearing, mandatory coursework requirement for Queensland University of Technology (QUT) doctorates. Since its inception in 1989, this unique blended learning program has provided the foundations for new researchers to produce original and innovative research. AIRS was redeveloped in 2012 and has now been evaluated with reference to the university’s strategic research priorities. Our research is the first comprehensive evaluation of the program from the learner perspective. We measured whether the program develops essential transferable skills and graduate capabilities to ensure best practice in the areas of publishing and data management. In particular, we explored whether AIRS prepares students to be agile researchers with the skills to adapt to different research contexts, both within and outside academia. The target group for our study consisted of HDR students and supervisors at QUT. Both quantitative and qualitative research methods were used; data were gathered via surveys and focus groups, with qualitative responses analyzed using NVivo. The results of the survey show that 82% of students surveyed believe that AIRS assisted their research process and helped them learn the skills they need as researchers. The responses of the 18% who expressed reservations about the benefits of AIRS were also examined to determine the key areas of concern. These included trends related to the timing of the program early in the candidature and a belief among some students that their previous research experience was sufficient for postgraduate study. New insights have been gained into how to better support HDR learners in partnership with supervisors and how to enhance the learning experiences of specific cohorts, including international students and mature learners.

Keywords: Data management, enhancing learning experience, publishing, research higher degree students.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1461
102 Perception of Predictive Confounders for the Prevalence of Hypertension among Iraqi Population: A Pilot Study

Authors: Zahraa Albasry, Hadeel D. Najim, Anmar Al-Taie

Abstract:

Background: Hypertension is considered one of the most important causes of cardiovascular complications and one of the leading causes of worldwide mortality. Identifying the potential risk factors associated with this medical health problem plays an important role in minimizing its incidence and related complications. The objective of this study is to assess and understand the perception of specific predictive confounding factors on the prevalence of hypertension (HT) among a sample of the Iraqi population in Baghdad, Iraq. Materials and Methods: A randomized cross-sectional study was carried out on 100 adult subjects during their visit to the outpatient clinic at a certain sector of Baghdad Province, Iraq. Demographic, clinical and health records, alongside specific screening and laboratory tests of the participants, were collected and analyzed to detect the potential effect of confounding factors on the prevalence of HT. Results: 63% of the study participants suffered from HT, most of them female patients (P < 0.005). Patients aged between 41-50 years old suffered from HT significantly more than other age groups (63.5%, P < 0.001). 88.9% of the participants were obese (P < 0.001) and 47.6% had diabetes with HT. Positive family history and a sedentary lifestyle were significantly more common among all hypertensive groups (P < 0.05). High salt and fatty food intake was significantly more frequent among patients suffering from isolated systolic hypertension (ISHT) (P < 0.05). A significant positive correlation between packed cell volume (PCV) and systolic blood pressure (SBP) (r = 0.353, P = 0.048) was found among normotensive participants.
Among hypertensive patients, a significant positive correlation was found between triglycerides (TG) and both SBP (r = 0.484, P = 0.031) and diastolic blood pressure (DBP) (r = 0.463, P = 0.040), while low density lipoprotein-cholesterol (LDL-c) showed a significant positive correlation with DBP (r = 0.443, P = 0.021). Conclusion: The prevalence of HT among the Iraqi population is of major concern. Further consideration is required to detect the impact of potential risk factors, to minimize blood pressure (BP) elevation and to reduce the risk of other cardiovascular complications later in life.
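The associations reported in this abstract are Pearson correlation coefficients with their p-values. As a purely illustrative sketch of how such an (r, P) pair is computed (the numbers below are made-up pairs, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (illustrative only):
# packed cell volume (PCV, %) and systolic blood pressure (SBP, mmHg)
pcv = np.array([38, 40, 41, 43, 44, 46, 47, 49])
sbp = np.array([110, 114, 112, 118, 120, 119, 124, 126])

# Pearson r and its two-sided p-value
r, p = stats.pearsonr(pcv, sbp)
print(f"r = {r:.3f}, p = {p:.4f}")
```

A positive r with p below the chosen significance level (0.05 in the study) is what the abstract reports as a "significant positive correlation".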

Keywords: Correlation, hypertension, Iraq, risk factors.

PDF Downloads: 901
101 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on the artificial stock market (ASM) problem. The paper begins by exploring the complexity of the stock market and the need for ASMs. ASMs aim to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market system is a complex system in which the relationship between the micro and macro levels cannot be captured analytically; computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the technique most commonly used to build ASMs. The paper proceeds by discussing the components of the ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents' attributes. The influence of social networks on the development of agent interactions is also addressed; network topologies such as small-world, distance-based, and scale-free networks may be utilized to model economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized, incorporating approaches such as genetic algorithms, genetic programming, artificial neural networks and reinforcement learning. The most common statistical properties (the stylized facts) of stock markets that are used for calibration and validation of ASMs are also discussed. Besides, we review the major related previous studies and categorize the approaches they employ. Finally, research directions and potential research questions are discussed. Future ASM research may focus on the macro level, by analyzing market dynamics, or on the micro level, by investigating the wealth distributions of the agents.
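To illustrate the agent-based simulation approach this survey discusses, the following is a minimal zero-intelligence market sketch (all names and parameter values are hypothetical; real ASMs add the learning mechanisms and network topologies described above):

```python
import random

random.seed(42)

def simulate_market(n_agents=100, n_steps=250, sensitivity=0.001):
    """Toy agent-based market: each step, every agent independently
    buys (+1) or sells (-1) at random; the price moves with the net
    excess demand. This zero-intelligence baseline links micro-level
    agent actions to a macro-level price series."""
    price = 100.0
    prices = [price]
    for _ in range(n_steps):
        excess_demand = sum(random.choice((-1, 1)) for _ in range(n_agents))
        price *= 1 + sensitivity * excess_demand  # multiplicative price impact
        prices.append(price)
    return prices

prices = simulate_market()
# Per-step returns, the series usually inspected for stylized facts
returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
```

Even this trivial model produces a random-walk-like price path; the stylized facts (fat tails, volatility clustering) emerge only with richer agent behavior.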

Keywords: Artificial stock markets, agent based simulation, bounded rationality, behavioral finance, artificial neural network, interaction, scale-free networks.

PDF Downloads: 2516
100 Long-Term Follow-up of Dynamic Balance, Pain and Functional Performance in Cruciate Retaining and Posterior Stabilized Total Knee Arthroplasty

Authors: Ahmed R. Z. Baghdadi, Mona H. Gamal Eldein

Abstract:

Background: With the perceived pain and poor function experienced following knee arthroplasty, patients are usually unsatisfied. Yet, controversy still persists over which operative technique least affects proprioception. Purpose: This study compared the effects of Cruciate Retaining (CR) and Posterior Stabilized (PS) total knee arthroplasty (TKA) on dynamic balance, pain and functional performance following rehabilitation. Methods: Thirty patients with CRTKA (group I), thirty with PSTKA (group II) and fifteen indicated for arthroplasty but not yet operated on (group III) participated in the study. The mean age was 54.53±3.44, 55.13±3.48 and 55.33±2.32 years and BMI 35.7±3.03, 35.7±1.99 and 35.73±1.03 kg/m2 for groups I, II and III, respectively. The Berg Balance Scale (BBS), WOMAC pain subscale and Timed Up-and-Go (TUG) and Stair-Climbing (SC) tests were used for assessment. Assessments were conducted four weeks pre- and post-operatively, and three, six and twelve months post-operatively, with the control group being assessed at the same time intervals. The post-operative rehabilitation involved hospitalization (1st week), home-based (2nd-4th weeks), and outpatient clinic (5th-12th weeks) programs, with follow-up of all groups for twelve months. Results: The mixed-design MANOVA revealed that group I had significantly lower pain scores and SC times compared with group II at three, six and twelve months post-operatively. Moreover, the BBS scores increased significantly, and the pain scores and TUG and SC times decreased significantly, at six months post-operatively compared with four weeks pre- and post-operatively and three months post-operatively in groups I and II, with the opposite being true four weeks post-operatively. However, there were no significant differences in BBS scores, pain scores or TUG and SC times between six and twelve months post-operatively in groups I and II.
Interpretation/Conclusion: CRTKA is preferable to PSTKA, possibly due to the preserved human proprioceptors in the un-excised PCL.

Keywords: Dynamic Balance, Functional Performance, Knee Arthroplasty, Long-Term.

PDF Downloads: 2047
99 Genotypic and Allelic Distribution of Polymorphic Variants of Gene SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) and Their Association to the Clinical Response to Metformin in Adult Pakistani T2DM Patients

Authors: Sadaf Moeez, Madiha Khalid, Zoya Khalid, Sania Shaheen, Sumbul Khalid

Abstract:

Background: Inter-individual variation in response to metformin, which is considered a first-line therapy for T2DM treatment, is considerable. The current study aimed to investigate the impact of two genetic variants, Leu125Phe (rs77474263) and Gly64Asp (rs77630697), in the gene SLC47A1 on the clinical efficacy of metformin in T2DM Pakistani patients. Methods: The study included 800 T2DM patients (400 metformin responders and 400 metformin non-responders) along with 400 ethnically matched healthy individuals. Genotypes were determined by allele-specific polymerase chain reaction. In-silico analysis was done to confirm the effect of the two SNPs on the structure of the genes. Association was statistically determined using SPSS software. Results: The minor allele frequencies for rs77474263 and rs77630697 were 0.13 and 0.12, respectively. For SLC47A1 rs77474263, heterozygous carriers of one mutant ‘T’ allele (CT) were fewer among metformin responders than metformin non-responders (29.2% vs. 35.5%). Likewise, the efficacy was further reduced (7.2% vs. 4.0%) in homozygous carriers of two ‘T’ alleles (TT). Remarkably, T2DM cases with two copies of allele ‘C’ (CC) were 2.11 times more likely to respond to metformin monotherapy. For SLC47A1 rs77630697, heterozygous carriers of one mutant ‘A’ allele (GA) were fewer among metformin responders than metformin non-responders (33.5% vs. 43.0%). Likewise, the efficacy was further reduced (8.5% vs. 4.5%) in homozygous carriers of two ‘A’ alleles (AA). Remarkably, T2DM cases with two copies of allele ‘G’ (GG) were 2.41 times more likely to respond to metformin monotherapy. In-silico analysis revealed that these two variants affect the structure and stability of their corresponding proteins. Conclusion: The present data suggest that the SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) polymorphisms are associated with the therapeutic response to metformin in T2DM patients of Pakistan.
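The reported effect sizes ("2.11 times" and "2.41 times more likely to respond") are consistent with odds ratios computed from genotype-by-response contingency tables. A minimal sketch with hypothetical counts (illustrative only, not the study's raw data):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 genotype-by-response table:

                        responder   non-responder
    wild-type (CC)          a             b
    variant (CT + TT)       c             d

    Returns the odds of responding for wild-type carriers
    relative to variant carriers.
    """
    return (a / b) / (c / d)

# Hypothetical counts for illustration only
print(odds_ratio(20, 10, 10, 10))  # prints 2.0: wild-type has twice the odds of responding
```

In SPSS-style analyses the same quantity typically comes from a binary logistic regression, whose exponentiated coefficient equals this ratio.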

Keywords: Diabetes, T2DM, SLC47A1, Pakistan, polymorphism.

PDF Downloads: 723
98 Early Melt Season Variability of Fast Ice Degradation Due to Small Arctic Riverine Heat Fluxes

Authors: Grace E. Santella, Shawn G. Gallaher, Joseph P. Smith

Abstract:

In order to determine the importance of small-system riverine heat flux on regional landfast sea ice breakup, our study explores the annual spring freshet of the Sagavanirktok River from 2014 to 2019. Seasonal heat cycling ultimately serves as the driving mechanism behind the freshet; however, as an emerging area of study, the extent to which inland thermodynamics influence coastal tundra geomorphology and connected landfast sea ice has not been extensively investigated in relation to small-scale Arctic river systems. The Sagavanirktok River is a small-to-midsized river system that flows south to north on the Alaskan North Slope, from the Brooks mountain range to the Beaufort Sea at Prudhoe Bay. Seasonal warming in the spring rapidly melts snow and ice in a northwards progression from the Brooks Range and transitional tundra highlands towards the coast and, when coupled with seasonal precipitation, results in a pulsed freshet that propagates through the Sagavanirktok River. The concentrated presence of newly exposed vegetation in the transitional tundra region due to spring melting results in higher absorption of solar radiation, owing to a lower albedo relative to snow-covered tundra and/or landfast sea ice. This results in spring flood runoff that advances over impermeable early-season permafrost soils with elevated temperatures relative to landfast sea ice and sub-ice flow. We examine the extent to which interannual temporal variability influences the onset and magnitude of river discharge by analyzing field measurements from United States Geological Survey (USGS) river and meteorological observation sites. Rapid influx of heat to the Arctic Ocean via riverine systems results in a noticeable decay of landfast sea ice, independent of ice breakup seaward of the shear zone.
Utilizing MODIS imagery from NASA’s Terra satellite, interannual variability of river discharge is visualized, allowing for optical validation that the discharge flow is interacting with landfast sea ice. Thermal erosion experienced by sediment fast ice at the arrival of warm overflow preconditions the ice regime for rapid thawing. We investigate the extent to which interannual heat flux from the Sagavanirktok River’s freshet significantly influences the onset of local landfast sea ice breakup. The early-season warming of atmospheric temperatures is evidenced by the presence of storms which introduce liquid, rather than frozen, precipitation into the system. The resultant decreased albedo of the transitional tundra supports the positive relationship between early-season precipitation events, inland thermodynamic cycling, and degradation of landfast sea ice. Early removal of landfast sea ice increases coastal erosion in these regions and has implications for coastline geomorphology which stress industrial, ecological, and humanitarian infrastructure.

Keywords: Albedo, freshet, landfast sea ice, riverine heat flux, seasonal heat cycling.

PDF Downloads: 437
97 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection

Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay

Abstract:

With the increase of credit card usage, the volume of credit card misuse has also significantly increased, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on some static physical characteristics, such as what the user knows (knowledge-based methods), or what he/she has access to (object-based methods). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while he/she is interacting with his/her electronic devices. These approaches rely on how people behave (instead of what they do), which cannot be easily forged.
By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable and scalable models of credit card fraud detection.

Keywords: Credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey.

PDF Downloads: 515
96 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class- Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

The problems arising from unbalanced data sets generally appear in real-world applications. Due to unequal class distributions, many researchers have found that the performance of existing classifiers tends to be biased towards the majority class. Nonparametric discriminant analysis based on k-nearest neighbors is a method that has been proposed for classifying unbalanced classes with good performance. In this study, methods of discriminant analysis are of interest for investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification of class-imbalanced data of diabetes risk groups. Data from a project maintaining healthy conditions for 599 employees of a government hospital in Bangkok were obtained for the classification problem. The employees were divided into three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, including the variables diabetes risk group, age, gender, blood glucose, and BMI, were analyzed and bootstrapped for 50 and 100 samples, 599 observations per sample, for additional estimation of the misclassification error rate. Each data set was examined for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were performed over 50 and 100 bootstrap samples and applied to the original data. In searching for the optimal classification rule, the prior probabilities were set to both equal proportions (0.33:0.33:0.33) and unequal proportions of (0.90:0.05:0.05), (0.80:0.10:0.10) and (0.70:0.15:0.15).
The results from 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k=3 or k=4 and prior probabilities of non-risk:risk:diabetic set to 0.90:0.05:0.05 or 0.80:0.10:0.10 gave the smallest misclassification error rate. The k-nearest neighbors approach would therefore be suggested for classifying three-class-imbalanced data of diabetes risk groups.
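The nonparametric rule described here scores each class by the product of its prior probability and a k-nearest-neighbor density estimate. A minimal sketch of that rule on toy data (the study itself used statistical-package procedures on the hospital data set; all values below are illustrative):

```python
import numpy as np

def knn_discriminant(X_train, y_train, x, k=3, priors=None):
    """Nonparametric k-NN discriminant rule: score class j by
    prior_j * (k_j / n_j), where k_j is the number of the k nearest
    neighbors belonging to class j and n_j is the size of class j in
    the training set. Dividing by n_j makes the score a density
    estimate, so minority classes are not swamped by the majority."""
    classes = np.unique(y_train)
    if priors is None:
        priors = {c: 1.0 / len(classes) for c in classes}  # equal priors
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    scores = {c: priors[c] * np.sum(nearest == c) / np.sum(y_train == c)
              for c in classes}
    return max(scores, key=scores.get)

# Toy imbalanced two-class data: class 0 is the majority
X = np.array([[0.0], [0.1], [0.2], [0.3], [5.0], [5.1], [5.2]])
y = np.array([0, 0, 0, 0, 1, 1, 1])
pred = knn_discriminant(X, y, np.array([5.05]), k=3, priors={0: 0.9, 1: 0.1})
```

Error-rate estimation as in the abstract would wrap this rule in a loop over bootstrap resamples of the training set and average the misclassification rate.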

Keywords: Bootstrap, diabetes risk groups, error rate, k-nearest neighbors.

PDF Downloads: 1999
95 An Initial Assessment of the Potential Contribution of ‘Community Empowerment’ to Mitigating the Drivers of Deforestation and Forest Degradation, in Giam Siak Kecil-Bukit Batu Biosphere Reserve

Authors: A. Sunkar, Y. Santosa, S. B. Rushayati

Abstract:

Indonesia has experienced annual forest fires that have rapidly destroyed and degraded its forests. Fires in the peat swamp forests of Riau Province have set the stage for problems to worsen, this being the ecosystem most prone to fires (and also the most difficult to extinguish). Despite various efforts to curb deforestation and forest degradation processes, severe forest fires are still occurring. To find an effective solution, the basic causes of the problems must be identified. It is therefore critical to have an in-depth understanding of the underlying causal factors that have contributed to deforestation and forest degradation as a whole, in order to attain reductions in their rates. An assessment of the drivers of deforestation and forest degradation was carried out, in order to design and implement measures that could slow these destructive processes. Research was conducted in Giam Siak Kecil–Bukit Batu Biosphere Reserve (GSKBB BR), in the Riau Province of Sumatera, Indonesia. A biosphere reserve was selected as the study site because such reserves aim to reconcile conservation with sustainable development. A biosphere reserve should promote a range of local human activities, together with development values that are in line spatially and economically with the area's conservation values, through the use of a zoning system. Moreover, GSKBB BR is an area with vast peatlands and experiences forest fires annually. Various factors were analysed to assess the drivers of deforestation and forest degradation in GSKBB BR; data were collected from focus group discussions with stakeholders, key informant interviews with key stakeholders, field observation and a literature review. Landsat satellite imagery was used to map forest-cover changes for various periods.
Analysis of Landsat images taken during the period 2010-2014 revealed that, within the non-protected area of the core zone, there was a trend towards decreasing peat swamp forest areas, increasing land clearance, and increasing areas of community oil palm and rubber plantations. Fire was used for land clearing, and most of the forest fires occurred in the most populous area (the transition area). The study found a relationship between the deforested/degraded areas and certain distance variables, i.e. distance from roads, villages and the borders between the core area and the buffer zone. The further the distance from the core area of the reserve, the higher the degree of deforestation and forest degradation. Research findings suggested that agricultural expansion may be the direct cause of deforestation and forest degradation in the reserve, whereas socio-economic factors were the underlying drivers of forest cover change; such factors consist of a combination of sociocultural, infrastructural, technological, institutional (policy and governance), demographic (population pressure) and economic (market demand) considerations. These findings indicated that local factors/problems were the critical causes of deforestation and degradation in GSKBB BR. This research therefore concluded that reductions in deforestation and forest degradation in GSKBB BR could be achieved through ‘local actor’-tailored approaches such as community empowerment.

Keywords: Actor-led solution, community empowerment, drivers of deforestation and forest degradation, Giam Siak Kecil– Bukit Batu Biosphere Reserve.

PDF Downloads: 1998
94 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets or collections are becoming important assets by themselves and can now be accepted as a primary intellectual output of research. The quality and usage of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of this data is a notoriously tedious, time-consuming process; in addition, it requires experts in the area, who are mostly not available. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed, so as to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using five well-known measures, including Accuracy, Hamming Loss, Micro-F, and Macro-F. The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods tested; the Classifier Chains method showed the worst performance. To recap, the benchmark has achieved promising results by utilizing preliminary exploratory data analysis performed on the collection, proposing new trends for research and providing a baseline for future studies.
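Of the evaluation measures named in this abstract, Hamming Loss is the fraction of label slots predicted incorrectly, Micro-F pools true/false positives across all labels, and Macro-F averages the per-label F1 scores. A sketch on toy prediction matrices (illustrative, not the paper's data):

```python
import numpy as np

def hamming_loss(Y_true, Y_pred):
    """Fraction of (document, label) slots predicted incorrectly."""
    return np.mean(Y_true != Y_pred)

def f1_per_label(Y_true, Y_pred):
    """F1 computed separately for each label column; Macro-F is the mean."""
    tp = np.sum((Y_true == 1) & (Y_pred == 1), axis=0)
    fp = np.sum((Y_true == 0) & (Y_pred == 1), axis=0)
    fn = np.sum((Y_true == 1) & (Y_pred == 0), axis=0)
    return 2 * tp / np.maximum(2 * tp + fp + fn, 1)

def micro_f1(Y_true, Y_pred):
    """F1 on counts pooled over all labels (Micro-F)."""
    tp = np.sum((Y_true == 1) & (Y_pred == 1))
    fp = np.sum((Y_true == 0) & (Y_pred == 1))
    fn = np.sum((Y_true == 1) & (Y_pred == 0))
    return 2 * tp / (2 * tp + fp + fn)

# Toy ground truth and predictions: 4 documents x 3 labels
Y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
Y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]])
```

Micro-F weights frequent labels more heavily, while Macro-F treats every label equally, which is why benchmarks on skewed label distributions report both.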

Keywords: Benchmark collection, program educational objectives, student outcomes, ABET, Accreditation, machine learning, supervised multiclass classification, text mining.

PDF Downloads: 825
93 Structural Parsing of Natural Language Text in Tamil Using Phrase Structure Hybrid Language Model

Authors: Selvam M, Natarajan. A M, Thangarajan R

Abstract:

Parsing is important in linguistics and natural language processing for understanding the syntax and semantics of a natural language grammar. Parsing natural language text is challenging because of problems like ambiguity and inefficiency; the interpretation of natural language text also depends on context-based techniques. A probabilistic component is essential to resolve ambiguity in both syntax and semantics, thereby increasing the accuracy and efficiency of the parser. The Tamil language has some inherent features which make it more challenging. To obtain solutions, a lexicalized and statistical approach must be applied in parsing, with the aid of a language model. Statistical models mainly focus on the semantics of the language and are suitable for large vocabulary tasks, whereas structural methods focus on syntax and model small vocabulary tasks. A statistical language model based on trigrams for the Tamil language, with a medium vocabulary of 5000 words, has been built. Though statistical parsing gives better performance through trigram probabilities and large vocabulary size, it has some disadvantages, such as a focus on semantics rather than syntax and a lack of support for free word order and long-term relationships. To overcome these disadvantages, a structural component must be incorporated into statistical language models, which leads to the implementation of hybrid language models. This paper attempts to build a phrase-structured hybrid language model which resolves the above-mentioned disadvantages. In the development of the hybrid language model, a new part-of-speech tag set for the Tamil language has been developed, with more than 500 tags providing wider coverage. A phrase-structured treebank has been developed with 326 Tamil sentences, covering more than 5000 words. A hybrid language model has been trained with the phrase-structured treebank using the immediate head parsing technique.
A lexicalized and statistical parser employing this hybrid language model and the immediate head parsing technique gives better results than pure grammar and trigram-based models.
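The trigram component of such a model estimates P(w3 | w1, w2) from corpus counts. A minimal maximum-likelihood sketch (toy English sentences for readability; the paper's model is for Tamil with a 5000-word vocabulary and a practical system would add smoothing for unseen trigrams):

```python
from collections import defaultdict

def train_trigram_lm(sentences):
    """Maximum-likelihood trigram model:
    P(w3 | w1, w2) = count(w1 w2 w3) / count(w1 w2).
    Each sentence is padded with two <s> start markers and one </s>
    end marker so every word has a full trigram context."""
    tri, bi = defaultdict(int), defaultdict(int)
    for s in sentences:
        toks = ["<s>", "<s>"] + s + ["</s>"]
        for i in range(len(toks) - 2):
            tri[tuple(toks[i:i + 3])] += 1
            bi[tuple(toks[i:i + 2])] += 1

    def prob(w1, w2, w3):
        return tri[(w1, w2, w3)] / bi[(w1, w2)] if bi[(w1, w2)] else 0.0

    return prob

# Toy corpus (illustrative only)
prob = train_trigram_lm([["the", "cat", "sat"], ["the", "cat", "ran"]])
```

Here prob("the", "cat", "sat") is 0.5, since "the cat" is followed by "sat" in one of its two occurrences; a hybrid model combines such probabilities with phrase-structure (PCFG) rule probabilities.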

Keywords: Hybrid Language Model, Immediate Head Parsing, Lexicalized and Statistical Parsing, Natural Language Processing, Parts of Speech, Probabilistic Context Free Grammar, Tamil Language, Tree Bank.

PDF Downloads: 3632
92 Evaluating the Performance of Organic, Inorganic and Liquid Sheep Manure on Growth, Yield and Nutritive Value of Hybrid Napier CO-3

Authors: F. A. M. Safwan, H. N. N. Dilrukshi, P. U. S. Peiris

Abstract:

Limited availability of high-quality green forage leads to low productivity in the national dairy herd of Sri Lanka. Growing grass and fodder to suit the production system is an efficient and economical solution to this problem. Among fodder varieties grown within the country, CO-3 is placed in a higher category, especially for tillering capacity, green forage yield, regeneration capacity, leaf-to-stem ratio, high crude protein content, resistance to pests and diseases, and freedom from adverse factors. An experiment was designed to determine the effect of organic sheep manure, inorganic fertilizers and liquid sheep manure on the growth, yield and nutritive value of CO-3. The study consisted of three treatments: sheep manure (T1), recommended inorganic fertilizers (T2) and liquid sheep manure (T3), which was prepared using the bucket fermentation method; each treatment consisted of three replicates, assigned randomly. The first harvest was obtained after 40 days of plant establishment, and the number of leaves (NL), leaf area (LA), tillering capacity (TC), fresh weight (FW) and dry weight (DW) were recorded; the second harvest was obtained 30 days after the first harvest and the same set of data was recorded. SPSS 16 software was used for data analysis. For proximate analysis, AOAC (2000) standard methods were used. Results revealed that the plants treated with T1 recorded the highest NL, LA, TC, FW and DW at the first and second harvests of CO-3 (p < 0.05), and T1 was statistically significantly different from T2 and T3. Although T3 recorded higher values than T2 for almost all growth parameters, the differences were not statistically significant (p > 0.05). In addition, the crude protein content was highest in T1, with a value of 18.33±1.61, and lowest in T2, with a value of 10.82±1.14, and the difference was statistically significant (p < 0.05).
Apart from this, the other proximate composition measures, crude fiber, crude fat, ash, moisture content and dry matter, were not statistically significantly different between treatments (p > 0.05). In accordance with the results, it was found that organic fertilizer is the best fertilizer for CO-3 in terms of growth parameters and crude protein content.

Keywords: Fertilizer, growth parameters, Hybrid Napier CO-3, proximate composition.

PDF Downloads: 1367
91 Species Profiling of White Grub Beetles and Evaluation of Pre and Post Sown Application of Insecticides against White Grub Infesting Soybean

Authors: Ajay Kumar Pandey, Mayank Kumar

Abstract:

White grubs (Coleoptera: Scarabaeidae) are major destructive pests in the western Himalayan region of Uttarakhand. Beetles feed on apple, apricot, plum, walnut, etc. during the night, while second and third instar grubs feed on the live roots of cultivated as well as non-cultivated crops. Collection and identification of scarab beetles through a light trap was carried out at the Crop Research Centre, Govind Ballabh Pant University, Pantnagar, Udham Singh Nagar (Uttarakhand) during 2018. Field trials were also conducted in 2018 to evaluate pre- and post-sowing application of different insecticides against the white grubs infesting soybean. The insecticides Carbofuran 3 Granule (G) (750 g a.i./ha), Clothianidin 50 Water Dispersible Granule (WG) (120 g a.i./ha), Fipronil 0.3 G (50 g a.i./ha), Thiamethoxam 25 WG (80 g a.i./ha), Imidacloprid 70 WG (300 g a.i./ha), Chlorantraniliprole 0.4% G (100 g a.i./ha) and a mixture of Fipronil 40% and Imidacloprid 40% WG (300 g a.i./ha) were applied at the time of sowing in the pre-sowing experiment, while the same dosages of insecticides were applied to the standing soybean crop during the first fortnight of July. Cumulative plant mortality data were recorded at 20, 40 and 60 day intervals and compared with the untreated control. A total of 23 species of white grub beetles were recorded in the light trap, and Holotrichia serrata Fabricius (Coleoptera: Melolonthinae) was found to be the predominant species, with a 20.6% relative abundance out of the total light trap catch (i.e. 1316 beetles), followed by Phyllognathus sp. (14.6% relative abundance). H. rosettae and Heteronychus lioderus occupied third and fourth rank with 11.85% and 9.65% relative abundance, respectively. The emergence of beetles of the predominant species started from 15 March 2018. In April, the average light trap catch was 382 white grub beetles; however, the peak emergence of most white grub species was observed from June to July 2018, i.e. 336 beetles in June followed by 303 beetles in July.
On the basis of the emergence pattern of the white grub beetles, it may be concluded that the peak emergence period (PEP) for the beetles of H. serrata was the second fortnight of April, a total period of 15 days. In May, June and July, a relatively low population of H. serrata was observed. The light trap catch showed a decreasing trend that continued until September, and no H. serrata beetle was observed on the light trap from September onwards. The cumulative plant mortality data in both experiments revealed that all the insecticidal treatments were significantly superior in terms of protection (6.49-16.82% cumulative plant mortality) to the untreated control, where plant mortality ranged from 17.28 to 39.65% during the study. The mixture of Fipronil 40% and Imidacloprid 40% WG applied at the rate of 300 g a.i. per ha proved most effective, with the lowest plant mortality (9.29% and 10.94% in the pre- and post-sowing crops), followed by Clothianidin 50 WG (120 g a.i. per ha), with 10.57% and 11.93% plant mortality in the pre- and post-sowing treatments, respectively. The two treatments were statistically at par with each other. In terms of production, all the insecticidal treatments were statistically superior (grain yields of 15.00-24.66 q per ha) to the untreated control, where the grain yield was 8.25 and 9.13 q per ha. Fipronil 40% + Imidacloprid 40% WG applied at the rate of 300 g a.i. per ha proved most effective and was significantly superior to Imidacloprid 70 WG applied at the rate of 300 g a.i. per ha.
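The relative-abundance figures reported above are simple proportions of the total light-trap catch. The sketch below reproduces them from species counts back-calculated from the reported percentages; the counts are illustrative reconstructions, not the raw field data.

```python
# Sketch: relative abundance of the four predominant white grub species.
# Counts are back-calculated from the reported percentages (total catch = 1316)
# and are illustrative, not the authors' raw trap records.
catches = {
    "Holotrichia serrata": 271,
    "Phyllognathus sp.": 192,
    "H. rosettae": 156,
    "Heteronychus lioderus": 127,
}
total = 1316  # total beetles trapped across all 23 species

relative_abundance = {
    species: round(100 * n / total, 2) for species, n in catches.items()
}
print(relative_abundance["Holotrichia serrata"])  # 20.59 (reported as 20.6%)
```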

Keywords: Bioefficacy, insecticide, Holotrichia, soybean, white grub.

90 Tools and Techniques in Risk Assessment in Public Risk Management Organisations

Authors: Atousa Khodadadyan, Gabe Mythen, Hirbod Assa, Beverley Bishop

Abstract:

Risk assessment, and the knowledge it provides, is a crucial part of any decision-making process in the management of risks and uncertainties. Failure to assess risks can undermine the entire risk management process, which in turn can lead to failure in achieving organisational objectives, as well as significant damage to the populations affected by the risks being assessed. The choice of tools and techniques in risk assessment can influence the degree and scope of decision-making and, subsequently, the risk response strategy. A variety of qualitative and quantitative tools and techniques are deployed within the broad process of risk assessment, and their sheer diversity makes it difficult for organisations to consistently employ the most appropriate methods. Adopting suitable tools and techniques is even more difficult in public risk regulation organisations, owing to the sensitive and complex nature of their activities. This is particularly the case in areas relating to the environment, food, and human health and safety, where organisational goals are tied up with societal, political and individual goals at national and international levels. Hence, this study set out to recognise, analyse and evaluate the different decision-support tools and techniques employed in assessing risks in public risk management organisations. The research is part of a mixed-methods study that aimed to examine perceptions of risk assessment and the extent to which organisations practise risk assessment tools and techniques. The study adopted a semi-structured questionnaire with qualitative and quantitative data analysis, covering a range of public risk regulation organisations from the UK, Germany, France, Belgium and the Netherlands.
The results indicated that public risk management organisations use a diverse set of tools and techniques in the risk assessment process. Preliminary hazard analysis, brainstorming, and hazard analysis and critical control points (HACCP) were described as the most practised risk identification techniques. Within qualitative and quantitative risk analysis, the participants named expert judgement, risk probability and impact assessment, sensitivity analysis, and data gathering and representation as the most practised techniques.
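As a rough illustration of one technique named by participants, risk probability and impact assessment, the sketch below scores hypothetical risks on 1-5 probability and impact scales. The scales, thresholds and risk names are assumptions for illustration, not taken from the study.

```python
# Minimal sketch of a probability-and-impact risk assessment.
# The 1-5 scales, level thresholds and example risks are illustrative.
def risk_score(probability, impact):
    """Combine a 1-5 probability rating with a 1-5 impact rating."""
    return probability * impact

def risk_level(score):
    """Map a score (1-25) onto a qualitative level."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Hypothetical register entries: name -> (probability, impact)
risks = {
    "contaminated food batch": (4, 5),
    "minor data-entry error": (3, 1),
}
for name, (p, i) in risks.items():
    s = risk_score(p, i)
    print(f"{name}: score={s}, level={risk_level(s)}")
```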

Keywords: Decision-making, public risk management organisations, risk assessment, tools and techniques.

89 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. 
In principle, the system could operate autonomously, prompting the operator only for critical decisions and allowing it to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities for further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at a random location, takes a noisy shortest path to their intended target. In all scenarios, the suspect's intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect's intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach in order to motivate a machine-learning approach based on reinforcement learning, which would relax some of the current limiting assumptions.
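The continuous posterior update described above can be sketched with a toy example. The target names, camera nodes and likelihood values below are illustrative stand-ins for the PeMS-based camera network, not the authors' actual model.

```python
# Toy sketch of the Bayesian update over candidate targets: each camera
# detection multiplies the posterior by P(detection at camera | target)
# and renormalises. Targets, cameras and likelihoods are hypothetical.
targets = ["stadium", "airport", "port"]
posterior = [1 / 3] * 3  # uniform prior over the pre-determined targets

# P(detection at camera c | intended target t): a detection on a road
# leading toward a target raises that target's likelihood.
likelihood = {
    "cam_north": [0.7, 0.2, 0.1],
    "cam_east":  [0.1, 0.6, 0.3],
}

for cam in ["cam_north", "cam_north", "cam_east"]:  # sequence of detections
    posterior = [p * l for p, l in zip(posterior, likelihood[cam])]
    z = sum(posterior)
    posterior = [p / z for p in posterior]  # renormalise after each detection

for t, p in zip(targets, posterior):
    print(f"P(target={t} | detections) = {p:.3f}")
```

A soft intervention would enter this sketch as a forced change in which cameras can next detect the suspect, chosen to maximise the expected information gain about the posterior.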

Keywords: Autonomous surveillance, Bayesian reasoning, decision-support, interventions, patterns-of-life, predictive analytics, predictive insights.

88 The Characteristics of Static Plantar Loading in the First-Division College Sprint Athletes

Authors: Tong-Hsien Chow

Abstract:

Background: Plantar pressure measurement is an effective method for assessing plantar loading and can be applied to evaluating the movement performance of the foot. The purpose of this study is to explore sprint athletes' plantar loading characteristics and pain profiles in static standing. Methods: Experiments were undertaken on 80 first-division college sprint athletes and 85 healthy non-sprinters. 'JC Mat', an optical plantar pressure measurement system, was applied to examine the differences between the two groups in the arch index (AI), three regional and six sub-regional plantar pressure distributions (PPD), and footprint characteristics. Pain assessment and self-reported health status in the sprint athletes were examined to identify their common pain areas. Results: In the control group, the males' AI fell into the normal range, whereas the females' AI was classified as the high-arch type. AI values in the sprint group were significantly lower than in the control group. PPD were higher at the medial metatarsal bone of both feet and the lateral heel of the right foot in the sprint group, the males in particular, and lower at the medial and lateral longitudinal arches of both feet. The footprint characteristics tended to support the AI and PPD results and reflected the corresponding pressure profiles. For the sprint athletes, the lateral knee joint and biceps femoris were the most common sites of musculoskeletal pain. Conclusions: The sprint athletes' AI were generally classified as high-arched, and their PPD fell between the features of runners and high-arched runners. These findings also correspond to the plantar pressure profiles associated with patellofemoral pain syndrome (PFPS), and the pain profiles appeared to correspond to the symptoms of high-arched runners and PFPS, suggesting a possible link between high arches and PFPS. The correlation between high arches and PFPS development warrants further study.
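The arch index used above is conventionally computed by splitting the toe-free footprint lengthwise into three equal regions and dividing the middle-third area by the total area. The sketch below uses the commonly cited classification thresholds and a hypothetical binarised footprint; it is an illustration of the measure, not the 'JC Mat' system's own algorithm.

```python
# Sketch of the arch index (AI): middle-third footprint area over total area.
# Thresholds follow the commonly cited classification (AI <= 0.21 high arch,
# 0.21 < AI <= 0.26 normal, AI > 0.26 low arch); footprint is hypothetical.
def arch_index(footprint):
    """footprint: rows of 0/1 pixels from forefoot to heel, toes excluded."""
    n = len(footprint)
    third = n // 3
    area = lambda rows: sum(sum(r) for r in rows)
    return area(footprint[third:2 * third]) / area(footprint)

def classify(ai):
    if ai <= 0.21:
        return "high arch"
    if ai <= 0.26:
        return "normal"
    return "low arch / flat"

# Hypothetical binarised footprint: 3 forefoot rows, 3 midfoot rows, 3 heel rows
footprint = [[1] * 5] * 3 + [[1] * 2] * 3 + [[1] * 4] * 3
ai = arch_index(footprint)
print(f"AI = {ai:.3f} -> {classify(ai)}")  # AI = 0.182 -> high arch
```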

Keywords: Sprint athletes, arch index, plantar pressure distributions, high arches, patellofemoral pain syndrome.

87 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients

Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi

Abstract:

Goals and Objectives: A typical analysis of survival data involves modelling time-to-event data, such as the time until death. A frailty model is a random-effect model for time-to-event data in which the random effect has a multiplicative influence on the baseline hazard function. This article investigates the use of a gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: Data were drawn from the records of patients with liver cirrhosis who were scheduled for liver transplantation at Imam Khomeini Hospital in Iran during the one-year enrolment period (May 2008-May 2009) and were followed up for at least seven years. To determine the factors affecting cirrhotic patients' survival in the presence of latent variables, the gamma frailty distribution was applied, with parametric models, namely the Exponential and Weibull distributions, considered for the survival time. Data analysis was performed using R software, and a significance level of 0.05 was used for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died, among them 48 (58%) men and 34 (42%) women. The main cause of liver cirrhosis was found to be hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second factor. Overall, the mean survival over the seven-year follow-up was 28.44 months; for deceased and censored patients it was 19.33 and 31.79 months, respectively. Exponential and Weibull parametric survival models with a gamma frailty distribution were fitted to the cirrhosis data. In both models, age, serum bilirubin, serum albumin and encephalopathy had a significant effect on the survival time of cirrhotic patients.
Conclusion: To investigate the factors affecting the time to death of patients with liver cirrhosis in the presence of latent variables, the gamma frailty model with parametric distributions appears suitable.
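The core idea of the gamma frailty model, a subject-specific multiplicative effect z on a Weibull baseline hazard h(t | z) = z · h0(t), can be sketched by simulation. The parameter values below are illustrative choices, not estimates from the cirrhosis data.

```python
# Sketch: simulating survival times under a Weibull baseline hazard with a
# gamma-distributed frailty, h(t | z) = z * (k/lam) * (t/lam)**(k-1).
# Shape k, scale lam (months) and frailty variance theta are illustrative.
import math
import random

def simulate_frailty_times(n, k=1.5, lam=30.0, theta=0.5, seed=42):
    """Draw n survival times with frailty z ~ Gamma(mean 1, variance theta)."""
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        z = rng.gammavariate(1 / theta, theta)  # E[z] = 1, Var[z] = theta
        u = rng.random()
        # Invert S(t | z) = exp(-z * (t/lam)**k)  =>  t = lam * (-ln u / z)**(1/k)
        times.append(lam * (-math.log(u) / z) ** (1 / k))
    return times

times = simulate_frailty_times(1000)
print(f"median simulated survival time: {sorted(times)[500]:.1f} months")
```

High-frailty subjects (large z) fail early, so the observed hazard is more heterogeneous than the baseline alone; the fitted models in the study account for this unobserved heterogeneity alongside the measured covariates.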

Keywords: Frailty model, latent variables, liver cirrhosis, parametric distribution.

86 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy

Authors: May Fadheel Estephan, Richard Perks

Abstract:

Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes, yet current methods for cancer detection have limitations, such as low sensitivity and specificity. The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS), a non-invasive optical technique that can characterize the size and concentration of particles in a solution. An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre fibre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2 μm, 0.8 μm, and 0.413 μm; the spectra were collected with a spectrometer and analysed in software by fitting them to a theoretical model to determine the size and concentration of the spheres. The results showed that the optical probe was able to differentiate between the three sphere sizes and to detect polystyrene spheres at suspension concentrations as low as 0.01%. The question addressed by this study was whether ELSS could be used to detect cancer cells. Since ELSS distinguished particles of different sizes, and cancer cells can be identified and the disease staged from the size and concentration of cells in a tissue sample, these results demonstrate the potential of ELSS for the early identification of cancer. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.
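The fitting step described above, matching a measured spectrum against a theoretical model to recover particle size, can be sketched as a least-squares search over a library of candidate spectra. The Gaussian "spectra" below are synthetic stand-ins for real Mie-theory curves, and the size-dependent width is an assumption for illustration only.

```python
# Sketch: size determination by least-squares matching of a measured ELSS
# spectrum against a library of model spectra, one per candidate diameter.
# The Gaussian model is a synthetic stand-in for Mie-theory calculations.
import math

WAVELENGTHS = [400 + 5 * i for i in range(61)]  # 400-700 nm sampling grid

def synthetic_spectrum(diameter_um, centre=550.0):
    """Stand-in model: spectral width varies with sphere diameter."""
    width = 40.0 + 60.0 * diameter_um
    return [math.exp(-((w - centre) ** 2) / (2 * width ** 2)) for w in WAVELENGTHS]

# Candidate library covering the three sphere sizes used in the study
library = {d: synthetic_spectrum(d) for d in (0.413, 0.8, 2.0)}

def fit_diameter(measured):
    """Return the library diameter minimising the sum-of-squares residual."""
    def sse(model):
        return sum((m - t) ** 2 for m, t in zip(measured, model))
    return min(library, key=lambda d: sse(library[d]))

measured = synthetic_spectrum(0.8)  # pretend measurement of a 0.8 um sphere
print(fit_diameter(measured))       # best-matching library diameter
```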

Keywords: Elastic light scattering spectroscopy, polystyrene spheres in suspension, optical probe, fibre optics.
