Search results for: true and false self
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1103

983 Effect of Smartphone Applications on Patients' Knowledge of Surgery-Related Adverse Events during Hospitalization

Authors: Eunjoo Lee

Abstract:

Background: As the number of surgeries increases, surgery-related adverse events are likely to become more prevalent. Patients who are somewhat knowledgeable about surgery-related adverse events are more likely to engage in safety initiatives to prevent them. Objectives: To evaluate the impact of a smartphone application developed during the study to enhance patients’ knowledge of surgery-related adverse events during hospitalization. Design: Non-randomized, one-group, pre- and post-intervention design. Participants: Thirty-six hospitalized patients admitted to the orthopedics unit of a general hospital in South Korea. Methods: First, a smartphone application to enhance patients’ knowledge of surgery-related adverse events was developed through an iterative process, which included a literature review, expert consultation, and pilot testing. The application was installed on participants’ smartphones, and research assistants taught the participants to use it. Twenty-five true/false questions were used to assess patients’ knowledge of preoperative precautions (eight items), surgical site infection (five items), Foley catheter management (four items), drainage management (four items), and anesthesia-related complications (four items). Results: Overall, the percentage of correct answers increased significantly, from 57.02% to 73.82%, although scores on a few specific topics improved only modestly. Although the patients’ understanding of drainage management and the Foley catheter increased substantially after they used the smartphone application, it remained relatively low. Conclusions: The smartphone application developed during this study enhanced the patients’ knowledge of surgery-related adverse events during hospitalization. However, nurses must make an additional effort to help patients understand certain topics, including drainage and Foley catheter management. 
Relevance to clinical practice: Insufficient patient knowledge increases the risk of adverse events during hospitalization. Nurses should take active steps to enhance patients’ knowledge of a range of safety issues during hospitalization, in order to decrease the number of surgery-related adverse events.

Keywords: patient education, patient participation, patient safety, smartphone application, surgical errors

Procedia PDF Downloads 220
982 Proprioceptive Neuromuscular Facilitation Exercises of Upper Extremities Assessment Using Microsoft Kinect Sensor and Color Marker in a Virtual Reality Environment

Authors: M. Owlia, M. H. Azarsa, M. Khabbazan, A. Mirbagheri

Abstract:

Proprioceptive neuromuscular facilitation (PNF) exercises are a series of stretching techniques commonly used in rehabilitation and exercise therapy. Assessing whether these exercises are maneuvered correctly requires extensive experience in this field and cannot be done by patients themselves. In this paper, we developed software that uses a Microsoft Kinect sensor, a spherical color marker, and real-time image processing methods to evaluate a patient’s performance in generating true patterns of movement. The software also provides the patient with visual feedback by showing his/her avatar in a virtual reality environment along with the correct path of the moving hand, wrist, and marker. Preliminary results during PNF exercise therapy of a patient in a room environment show the ability of the system to identify any deviation of the maneuvering path and direction of the hand from the one performed by an expert physician.

Keywords: image processing, Microsoft Kinect, proprioceptive neuromuscular facilitation, upper extremities assessment, virtual reality

Procedia PDF Downloads 242
981 An Estimating Parameter of the Mean in Normal Distribution by Maximum Likelihood, Bayes, and Markov Chain Monte Carlo Methods

Authors: Autcha Araveeporn

Abstract:

This paper compares parameter estimation of the mean of a normal distribution by Maximum Likelihood (ML), Bayes, and Markov Chain Monte Carlo (MCMC) methods. The ML estimator is the average of the data, the Bayes estimator is derived from the prior distribution, and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. After the parameter is estimated by each method, hypothesis testing is used to check the robustness of the estimators. Data are simulated from a normal distribution with a true mean of 2 and variance of 4, 9, and 16, with sample sizes of 10, 20, 30, and 50. The results show that the ML and MCMC estimates differ perceivably from the true parameter when the sample size is 10 or 20 with variance 16. Furthermore, the Bayes estimator, computed from a prior distribution with mean 1 and variance 12, showed a significant difference in the mean with variance 9 at sample sizes 10 and 20.
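The three estimators can be sketched in a few lines of Python. The true mean (2), variance (4), and the prior with mean 1 and variance 12 follow the abstract; the known-variance conjugate update, the sample size of 30, and the number of posterior draws are illustrative assumptions, since the abstract does not spell out the exact Gibbs scheme:

```python
import random
import statistics

random.seed(0)

# Simulate data with true mean 2 and variance 4 (one of the abstract's settings).
true_mean, true_var, n = 2.0, 4.0, 30
data = [random.gauss(true_mean, true_var ** 0.5) for _ in range(n)]

# 1) Maximum likelihood: the sample average.
mle = statistics.fmean(data)

# 2) Conjugate Bayes estimate: normal prior on the mean (mean 1, variance 12),
#    data variance treated as known -- an illustrative simplification.
prior_mean, prior_var = 1.0, 12.0
post_prec = 1.0 / prior_var + n / true_var
bayes = (prior_mean / prior_var + sum(data) / true_var) / post_prec

# 3) MCMC: repeatedly sample the mean from its full conditional
#    (with one parameter, the Gibbs step is the exact posterior).
draws = [random.gauss(bayes, (1.0 / post_prec) ** 0.5) for _ in range(5000)]
mcmc = statistics.fmean(draws)

print(round(mle, 3), round(bayes, 3), round(mcmc, 3))
```

With a moderate sample size the three estimates agree closely; the Bayes and MCMC estimates shrink only slightly toward the prior mean because the data precision dominates the prior precision.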

Keywords: Bayes method, Markov chain Monte Carlo method, maximum likelihood method, normal distribution

Procedia PDF Downloads 329
980 Current Status and Prospects of Further Control of Brucellosis in Humans and Domestic Ruminants in Bangladesh

Authors: A. K. M. Anisur Rahman

Abstract:

Brucellosis is an ancient disease and one of the world's most widespread zoonoses, affecting both public health and animal production. Its current status in humans and domestic ruminants, along with probable means of further control in Bangladesh, is described. The true exposure prevalence of brucellosis in cattle, goats, and sheep seems to be low: 0.3% in cattle, 1% in goats, and 1.2% in sheep. The true prevalence of brucellosis in humans was also reported to be around 2%. In such a low-prevalence scenario in both humans and animals, the positive predictive values of the diagnostic tests were very low. A role for Brucella species in the abortion of domestic ruminants is less likely. To date, no Brucella spp. have been isolated from animal or human samples. However, Brucella abortus DNA was detected in seropositive humans, cattle, and buffalo; in the milk of cows, goats, and gayals; and in the semen of an infected bull. Consumption of raw milk and unpasteurized milk products is not common among Bangladeshi people. Close contact with animals, artificial insemination using semen from infected bulls, grazing mixed species of animals together in the field, and transboundary animal movement are important factors that should be considered for the further control of this zoonosis in Bangladesh.
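The low positive predictive values at ~1-2% prevalence follow directly from Bayes' theorem. The short Python sketch below uses hypothetical sensitivity and specificity figures (95% and 98%, not taken from the studies discussed here) purely to illustrate the effect:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Hypothetical serological test: 95% sensitive, 98% specific.
# At 1% prevalence (close to the ruminant figures above), most
# positives are false positives; at 30% prevalence they are not.
low = ppv(0.01, 0.95, 0.98)   # -> 0.32
high = ppv(0.30, 0.95, 0.98)  # -> 0.95

print(round(low, 2), round(high, 2))
```

Even a quite specific test, applied in a near-disease-free population, yields mostly false-positive results, which is why test interpretation must account for prevalence.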

Keywords: brucellosis, control, human, zoonosis

Procedia PDF Downloads 327
979 The Effects of Applied Negative Bias Voltage on Structure and Optical Properties of a-C:H Films

Authors: X. L. Zhou, S. Tunmee, I. Toda, K. Komatsu, S. Ohshio, H. Saitoh

Abstract:

Hydrogenated amorphous carbon (a-C:H) films were synthesized by a radio-frequency plasma-enhanced chemical vapor deposition (rf-PECVD) technique with bias voltages ranging from 0.0 to -0.5 kV. Raman spectra indicated polymer-like hydrogenated amorphous carbon (PLCH) films at bias voltages of 0.0 to -0.1 kV and a-C:H films at -0.2 to -0.5 kV. The surface chemistry of all films was studied by X-ray photoelectron spectroscopy (XPS), which revealed C-C (sp2 and sp3) and C-O bonds and the relative carbon (C) and oxygen (O) atomic contents. Oxygen contamination affected the structure and optical properties. The true density of the PLCH and a-C:H films was characterized by the X-ray reflectivity (XRR) method and found to be in the range of 1.16-1.73 g/cm3, depending on the increasing bias voltage. The hardness was proportional to the true density of the films. In addition, the optical properties, i.e., the refractive index (n) and extinction coefficient (k) of these films, were determined by spectroscopic ellipsometry (SE), giving values of 1.62-2.10 (n) and 0.04-0.15 (k), respectively. These results confirm the Raman findings that the structure changes as the applied bias voltage increases.

Keywords: negative bias voltage, a-C:H film, oxygen contamination, optical properties

Procedia PDF Downloads 446
978 A New Second Tier Screening for Congenital Adrenal Hyperplasia Utilizing One Dried Blood Spot

Authors: Engy Shokry, Giancarlo La Marca, Maria Luisa Della Bona

Abstract:

Newborn screening for Congenital Adrenal Hyperplasia (CAH) relies on quantification of 17α-hydroxyprogesterone using enzyme immunoassays. Although these assays are rapid, readily available, and easy to perform, their reliability is questionable due to a lack of selectivity and specificity, resulting in a large number of false positives and, consequently, family anxiety and associated hospitalization costs. To improve the specificity of conventional 17α-hydroxyprogesterone screening, which may show false transient elevation in preterm, low-birth-weight, or acutely ill neonates, steroid profiling by LC-MS/MS was implemented as a second-tier test. Previously applied LC-MS/MS methods had the disadvantage of requiring a relatively high number of blood drops. Since the number of newborn screening tests is increasing, it is necessary to minimize the sample volume requirement to make maximum use of the blood samples collected on filter paper. The proposed new method requires just one 3.2 mm dried blood spot (DBS) punch. Extraction was done using methanol:water:formic acid (90:10:0.1, v/v/v) containing deuterium-labelled internal standards. Extracts were evaporated and reconstituted in 10% acetone in water. A column-switching strategy for on-line sample clean-up was applied to improve the chromatographic run. The first separative step retained the investigated steroids and passed through the majority of high-molecular-weight impurities. After the valve switching, the investigated steroids were back-flushed from the POROS® column onto the analytical column and separated using gradient elution. Quantitation limits were found to be 5, 10, and 50 nmol/L for 17α-hydroxyprogesterone, androstenedione, and cortisol, respectively, with mean recoveries between 98.31% and 103.24% and intra-/inter-assay CV% < 10% except at the LLOQ. The method was validated using standard addition calibration and isotope dilution strategies. 
Reference ranges were determined by analysing samples from 896 infants of various ages at the time of sample collection. The method was also applied to patients with confirmed CAH. Our method represents an attractive combination of low sample volume requirement, minimal sample preparation time without derivatization, and quick chromatography (5 min). The three-steroid profile and the concentration ratio (17OHP + androstenedione)/cortisol allowed better screening outcomes for CAH, reducing false positives, associated costs, and anxiety.

Keywords: congenital adrenal hyperplasia (CAH), 17α-hydroxyprogesterone, androstenedione, cortisol, LC-MS/MS

Procedia PDF Downloads 408
977 Centre of the Milky Way Galaxy

Authors: Svanik Garg

Abstract:

The center of our galaxy is often referred to as the ‘galactic center’ and has many theories associated with its true nature. Given the intervening interstellar dust and bright stars, it is nearly impossible to observe its position, about 24,000 light-years away, directly. Due to this uncertainty, humans have often speculated about what could exist at the vantage point around which the entire galaxy spirals and revolves, with wild theories ranging from the presence of dark matter to black holes and wormholes. Data to date are very limited, and conclusions are to the best of the author's knowledge, as the only way to view the galactic center is through X-ray and infrared imaging, which counter the problems mentioned earlier. This paper examines, first, the existence of a galactic center, then the methods to identify what it might contain, and lastly, possible conclusions along with implications of the findings. Several secondary sources, along with a Python tool to analyze X-ray readings, were used to identify the true nature of what lies at the center of the galaxy, whether it be a void due to the existence of dark energy or a black hole. Using this roughly four-part examination, a plausible definition of the galactic center was formulated, keeping in mind the various theories, data, and ideas proposed by researchers. This paper aims to dissect the theory of a galactic center and identify its nature to help understand what it shows about galaxies and our universe.

Keywords: milky way, galaxy, dark energy, stars

Procedia PDF Downloads 92
976 Arguments against Innateness of Theory of Mind

Authors: Arkadiusz Gut, Robert Mirski

Abstract:

The nativist-constructivist debate constitutes a considerable part of current research on mindreading. Peter Carruthers and his colleagues are known for their nativist position in the debate and take issue with constructivist views proposed by other researchers, with Henry Wellman, Alison Gopnik, and Ian Apperly at the forefront. More specifically, Carruthers together with Evan Westra propose a nativistic explanation of Theory of Mind Scale study results that Wellman et al. see as supporting constructivism. While allowing for development of the innate mindreading system, Westra and Carruthers base their argumentation essentially on a competence-performance gap, claiming that cross-cultural differences in Theory of Mind Scale progression as well as discrepancies between infants’ and toddlers’ results on verbal and non-verbal false-belief tasks are fully explainable in terms of acquisition of other, pragmatic, cognitive developments, which are said to allow for an expression of the innately present Theory of Mind understanding. The goal of the present paper is to bring together arguments against the view offered by Westra and Carruthers. It will be shown that even though Carruthers et al.’s interpretation has not been directly controlled for in Wellman et al.’s experiments, there are serious reasons to dismiss such nativistic views which Carruthers et al. advance. The present paper discusses the following issues that undermine Carruthers et al.’s nativistic conception: (1) The concept of innateness is argued to be developmentally inaccurate; it has been dropped in many biological sciences altogether and many developmental psychologists advocate for doing the same in cognitive psychology. 
The reality of development is a complex interaction of changing elements that is belied by the simplistic notion of ‘the innate.’ (2) The purported innate mindreading conceptual system posited by Carruthers ascribes adult-like understanding to infants, ignoring the difference between first- and second-order understanding, between what can be called ‘presentation’ and ‘representation.’ (3) Advances in neurobiology speak strongly against any inborn conceptual knowledge; the neocortex, where conceptual knowledge finds its correlates, is said to be largely equipotential at birth. (4) Carruthers et al.’s interpretations are excessively charitable; they extend results of studies done with 15-month-olds to conclusions about innateness, whereas in reality at that age there has been plenty of time for construction of the skill. (5) The looking-time paradigm used in the non-verbal false-belief tasks that provide the main support for Carruthers’ argumentation has been criticized on methodological grounds. In light of the presented arguments, nativism in theory of mind research is concluded to be an untenable position.

Keywords: development, false belief, mindreading, nativism, theory of mind

Procedia PDF Downloads 184
975 Depth of Penetration and Nature of Interferential Current in Cutaneous, Subcutaneous and Muscle Tissues

Authors: A. Beatti, L. Chipchase, A. Rayner, T. Souvlis

Abstract:

The aims of this study were to investigate the depth of interferential current (IFC) penetration through soft tissue and the area over which IFC spreads during clinical application. Premodulated IFC and ‘true’ IFC at beat frequencies of 4, 40, and 90 Hz were applied via four electrodes to the distal medial thigh of 15 healthy subjects. The current was measured via three Teflon-coated fine needle electrodes that were inserted into the superficial layer of skin, then into the subcutaneous tissue (≈1 cm deep), and then into muscle tissue (≈2 cm deep). The needle electrodes were placed in the middle of the four IFC electrodes, between two channels, and outside the four electrodes. Readings were taken at each tissue depth from each electrode during each treatment frequency, then digitized and stored for analysis. All voltages were greater than baseline at all depths and locations (p < 0.01), and voltages decreased with depth (p = 0.039). Lower voltages of all currents were recorded in the middle of the four electrodes, with the highest voltage recorded outside the four electrodes at all depths (p < 0.001). For each frequency of ‘true’ IFC, the voltage was higher in the superficial layer outside the electrodes (p ≤ 0.01). Premodulated IFC had higher voltages along the line of one circuit (p ≤ 0.01). Clinically, ‘true’ IFC appears to pass through skin layers to depth and is more efficient than premodulated IFC when targeting muscle tissue.

Keywords: electrotherapy, interferential current, interferential therapy, medium frequency current

Procedia PDF Downloads 318
974 Ill-Posed Inverse Problems in Molecular Imaging

Authors: Ranadhir Roy

Abstract:

Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large scale in three dimensions and by the diffusion equation, which models the physical phenomena within the media. The inverse problem is posed as a nonlinear optimization in which the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions for obtaining stable solutions are established in Tikhonov’s regularization method, in which the a priori information is introduced via a stabilizing functional, which may be designed to incorporate some relevant information about the inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution, or, in the case of optical imaging, the true image. Yet, in clinically based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the constrained optimization technique, which is based on simple bounds on the optical parameter properties of the tissue, can easily be implemented in the PMBF method. Imposing the constraints on the optical properties of the tissue explicitly restricts the solution set and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the given objective function. The accuracy and rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers. To obtain this initial guess, we solve a least-squares unconstrained minimization problem. 
Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and noncontact experimentally measured data.
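As a point of reference for the Tikhonov method that the PMBF approach replaces, here is a minimal Python/NumPy sketch on a toy ill-conditioned linear model. The Vandermonde forward operator, the noise level, and the choice of alpha are illustrative stand-ins, not the paper's actual diffusion-based problem:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy ill-conditioned forward model: the columns of a Vandermonde matrix
# are nearly collinear, so the naive least-squares inverse is unstable.
n, m = 20, 8
t = np.linspace(0, 1, n)
A = np.vander(t, m, increasing=True)
x_true = np.zeros(m)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 1e-3 * rng.standard_normal(n)

def tikhonov(A, b, alpha):
    """Minimize ||Ax - b||^2 + alpha * ||x||^2 via the normal equations."""
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(k), A.T @ b)

x_reg = tikhonov(A, b, alpha=1e-4)
residual = float(np.linalg.norm(A @ x_reg - b))
print(residual)  # small residual: the regularized fit explains the data
```

The penalty term `alpha * np.eye(k)` lifts the smallest eigenvalues of the normal matrix, capping the condition number in exchange for a small bias, which is the trade-off the abstract contrasts with the PMBF method's explicit bound constraints.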

Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method

Procedia PDF Downloads 247
973 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, losses or damage may accrue whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR’s search-and-rescue activity is subject to two types of error: (i) a ‘false-negative’ detection error, where a target object is not discovered (‘overlooked’) by the AMR’s sensors even though the AMR is in its close neighborhood, and (ii) a ‘false-positive’ detection error, also known as ‘a false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding locally optimal strategies. 
A specificity of the considered operational research problem in comparison with the traditional Kadane-De Groot-Stone search models is that in our model the probability of the successful search outcome depends not only on cost/time/probability parameters assigned to each individual location but, as well, on parameters characterizing the entire history of (unsuccessful) search before selecting any next location. We provide a fast approximation algorithm for finding the AMR route adopting a greedy search strategy in which, in each step, the on-board computer computes a current search effectiveness value for each location in the zone and sequentially searches for a location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and corresponding algorithm.
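A minimal Python sketch of the greedy ranking described above, together with a Bayes update showing how an unsuccessful look (a possible false negative) would shrink a cell's probability before re-ranking. The cell values, the effectiveness definition (expected detection per unit cost), and the update rule are illustrative assumptions, not the authors' exact model:

```python
# Candidate cells: prior probability the target is there, the sensor's
# detection probability given the target is present (1 - false-negative
# rate), and the cost of searching the cell. All numbers are hypothetical.
cells = {
    "A": {"p_target": 0.30, "p_detect": 0.8, "cost": 2.0},
    "B": {"p_target": 0.25, "p_detect": 0.9, "cost": 1.0},
    "C": {"p_target": 0.45, "p_detect": 0.6, "cost": 3.0},
}

def effectiveness(c):
    # Expected detection per unit cost -- the greedy ranking criterion.
    return c["p_target"] * c["p_detect"] / c["cost"]

def update_after_miss(p, p_detect):
    # Bayes' rule: an unsuccessful look shrinks the cell's probability,
    # since the sensor may have overlooked the target (false negative).
    return p * (1 - p_detect) / (1 - p * p_detect)

def greedy_route(cells):
    remaining = {k: dict(v) for k, v in cells.items()}
    route = []
    while remaining:
        best = max(remaining, key=lambda k: effectiveness(remaining[k]))
        route.append(best)
        del remaining[best]
    return route

print(greedy_route(cells))                    # ['B', 'A', 'C']
print(round(update_after_miss(0.5, 0.8), 3))  # 0.167
```

In a full history-dependent search, `update_after_miss` would be applied after each unsuccessful look and the remaining cells re-ranked, which is how the probability of success comes to depend on the entire search history.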

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 145
972 Motivational Factors for the Practice of Exercise in a Sample of Portuguese Fitness Center Users

Authors: N. Sena, C. Vasconcelos

Abstract:

Portugal has a low rate of people who exercise. Fitness centers are a widely recognized context for exercising. Thus, the objective of this study is to analyze the motivational factors for the practice of exercise in a sample of Portuguese fitness center users. The sample consists of 34 users (23 men and 11 women), aged between 16 and 60 years (24.7 ± 11.5 years). The instrument used for data collection was the Exercise Motivation Questionnaire (version translated and validated into Portuguese), consisting of forty-nine items grouped into ten motivational factors. Responses to the Exercise Motivation Questionnaire are given on a 6-point Likert scale (0 = "not at all true for me" to 5 = "completely true for me"). With regard to the results, the motivational factors considered most relevant by the sample of our study were “Well-being” (4.44 ± 0.28), followed by “Health” (4.29 ± 0.57) and “Stress Management” (4.06 ± 0.54). The factors “Affiliation” (3.11 ± 0.49), “Personal Appreciation” (2.26 ± 0.59), and “Medical History” (1.71 ± 0.74) were considered by the respondents to be the least important factors for exercising. The conclusion of this study is that, in this sample, the factors that most motivated the practice of exercise were “Well-being”, “Health”, and “Stress Management”. Conversely, the factors that least motivated the individuals in this sample to exercise were “Affiliation”, “Personal Appreciation”, and “Medical History”.

Keywords: exercise, fitness center users, motivational factors, Portugal

Procedia PDF Downloads 55
971 Capnography for Detection of Return of Spontaneous Circulation in Pseudo-PEA

Authors: Yiyuan David Hu, Alex Lindqwister, Samuel B. Klein, Karen Moodie, Norman A. Paradis

Abstract:

Introduction: Pseudo-pulseless electrical activity (p-PEA) is a lifeless form of profound cardiac shock characterized by measurable cardiac mechanical activity without clinically detectable pulses. Patients in pseudo-PEA carry different prognoses than those in true PEA and may require different therapies. End-tidal carbon dioxide (ET-CO2) is a reliable indicator of the return of spontaneous circulation (ROSC) in ventricular fibrillation and true PEA but has not been studied in p-PEA. Hypothesis: ET-CO2 can be used as an independent indicator of ROSC in p-PEA resuscitation. Methods: 30 kg female swine (N = 14) under intravenous anesthesia were instrumented with aortic and right atrial micromanometer pressure. ECG and ET-CO2 were measured continuously. p-PEA was induced by ventilation with 6% oxygen in 94% nitrogen and was defined as a systolic aortic pressure less than 40 mmHg. The statistical relationships between ET-CO2 and ROSC are reported. Results: ET-CO2 during resuscitation strongly correlated with ROSC (Figure 1). Mean ET-CO2 during p-PEA was 28.4 ± 8.4, while mean ET-CO2 at ROSC in the 100% O2 cohort was 42.2 ± 12.6 (p < 0.0001) and mean ET-CO2 at ROSC in the 100% O2 + CPR cohort was 33.0 ± 15.4 (p < 0.0001). Analysis of the slope was limited to one minute of resuscitation data to capture local linearity; assessment began 10 seconds after resuscitation started to allow the ventilator to mix 100% O2. Pigs that would recover with 100% O2 had a slope of 0.023 ± 0.001, oxygen + CPR had a slope of 0.018 ± 0.002, and oxygen + CPR + epinephrine had a slope of 0.0050 ± 0.0009. Conclusions: During resuscitation from porcine hypoxic p-PEA, a rise in ET-CO2 is indicative of ROSC.

Keywords: ET-CO2, resuscitation, capnography, pseudo-PEA

Procedia PDF Downloads 161
970 From Script to Film: The Fading Voice of the Screenwriter

Authors: Ana Sofia Torres Pereira

Abstract:

On January 15th, 2015, Peter Bart, editor-in-chief of Variety magazine, published an article posing the following question: “Are screenwriters becoming obsolete in Hollywood?” Is Hollywood losing its interest in well-plotted, well-written scripts crafted by professionals? That screenwriters have been undervalued, forgotten, and left behind since the beginning of film is a well-known fact, but are they now at the brink of extinction? If fiction films are about people and stories, so, simply put, all about the script, what does it mean to say that the screenwriter is becoming obsolete? What will be the consequences of the possible death of the screenwriter for the cinema world? All of these questions lead us to an ultimate one: What is the true importance of a screenwriter? What can a screenwriter do that a director, for instance, cannot? How should a script be written and read in order not to become obsolete? And what about those countries, like Portugal, for example, in which the figure of the screenwriter is yet to be heard and known? How can screenwriters find their voice in a world driven by the tyrannical voice of the director? In a demanding cinema world where the director is considered the author of a film, it is important to know where we can find the voice of the screenwriter, the true language of the screenplay, and the importance this voice and specific language might have for the future of storytelling and of film. In a paper that admittedly poses more questions than answers, I will try to unveil the importance a screenplay might have in Hollywood, in Portugal, and in the cinema and communication world in general.

Keywords: cinema, communication, director, language, screenplay, screenwriting, story

Procedia PDF Downloads 289
969 Representations of Wolves (Canis lupus) in Feature Films: The Detailed Analysis of the Text and Picture in the Chosen Movies

Authors: Barbara Klimek

Abstract:

Wolves are one of the most misrepresented species in literature and the media. They are often portrayed as vicious, man-eating beasts whose main life goal is to hunt and kill people. Many movie directors use wolves as main characters in different types of films, especially horror, thriller, and science fiction movies, to create gore and fear. This, in turn, results in people fearing wolves and wanting to destroy them. Such cultural creations have led to wolves being stalked, abused, and killed by people, and in many areas they have been completely destroyed. This paper analyzes the representations of wolves in the chosen films in four main aspects: 1. the overall picture – true versus false, positive versus negative, based on stereotypes or realistic, displaying wolf behavior typical of the species or fake; 2. subjectivity – how humans treat and talk about the animals, as subjects or as objects; 3. animal welfare – how humans treat wolves and nature, and whether the human–animal relations are positive and appropriate or negative and abusive; 4. empathy – are human characters shown to co-feel suffering with the wolves, do they display signs of empathy towards the animals, and do the animals empathize with humans? The detailed analysis of the text and pictures presented in the chosen films concludes that wolves are especially misrepresented in the movies. Their behavior is shown as fake and negative, based on stereotypes and myths; the human–animal relations are shown mainly as negative, where people fear the animals and hunt them, and wolves stalk, follow, attack, and kill humans. This shows that people do not understand the needs of these animals and are unable to show empathy towards them. The article discusses the above-mentioned study results in detail and presents many examples. Animal representations in cultural creations, including film, have a great impact on how people treat particular species of animals. 
The media shape people’s attitudes, which in turn results in people either respecting and protecting the animals or fearing, disliking, and destroying the particular species.

Keywords: film, movies, representations, wolves

Procedia PDF Downloads 174
968 Modified Lot Quality Assurance Sampling (LQAS) Model for Quality Assessment of Malaria Parasite Microscopy and Rapid Diagnostic Tests in Kano, Nigeria

Authors: F. Sarkinfada, Dabo N. Tukur, Abbas A. Muaz, Adamu A. Yahuza

Abstract:

Appropriate Quality Assurance (QA) of parasite-based diagnosis of malaria to justify Artemisinin-based Combination Therapy (ACT) is essential for malaria programmes. In Low- and Middle-Income Countries (LMIC), resource constraints appear to be a major challenge in implementing the conventional QA system. We designed and implemented a modified LQAS model for QA of malaria parasite (MP) microscopy and RDT in a State Specialist Hospital (SSH) and a University Health Clinic (UHC) in Kano, Nigeria. The capacities of both facilities for MP microscopy and RDT were assessed before implementing the modified LQAS over a period of 3 months. Quality indicators comprising the quality of blood films and staining, MP positivity rates, concordance rates, error rates (in terms of false positives and false negatives), sensitivity, and specificity were monitored and evaluated. Seventy-one percent (71%) of the basic requirements for malaria microscopy were available in both facilities, but certified microscopists, SOPs, and quality assurance mechanisms were absent. A daily average of 16 to 32 blood samples was tested, with a blood film staining quality of >70% recorded in both facilities. Using microscopy, the MP positivity rates were 50.46% and 19.44% in SSH and UHC respectively, while the MP positivity rates were 45.83% and 22.78% in SSH and UHC when RDT was used. Higher concordance rates of 88.90% and 93.98% were recorded in SSH and UHC respectively using microscopy, while lower rates of 74.07% and 80.58% in SSH and UHC were recorded when RDT was used. In both facilities, error rates were higher when RDT was used than with microscopy. Sensitivity and specificity were higher when microscopy was used (95% and 84% in SSH; 94% in UHC) than when RDT was used (72% and 76% in SSH; 78% and 81% in UHC). 
It could be feasible to implement an integrated QA model for MP microscopy and RDT using modified LQAS in malaria control programmes in Low- and Middle-Income Countries that have resource constraints for parasite-based diagnosis of malaria to justify ACT treatment.
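
The quality indicators monitored in the model follow directly from a 2x2 comparison of each facility's results against a reference reading. A minimal sketch, using invented counts rather than the study's data:

```python
# Sketch: quality indicators from a 2x2 comparison of field results against a
# reference diagnosis. Counts below are illustrative, not the study's data.

def quality_indicators(tp, fp, fn, tn):
    """Return sensitivity, specificity, concordance and error rates (%)."""
    total = tp + fp + fn + tn
    sensitivity = 100 * tp / (tp + fn)        # true positives detected
    specificity = 100 * tn / (tn + fp)        # true negatives detected
    concordance = 100 * (tp + tn) / total     # agreement with reference
    false_pos_rate = 100 * fp / total         # error rate: false positives
    false_neg_rate = 100 * fn / total         # error rate: false negatives
    return sensitivity, specificity, concordance, false_pos_rate, false_neg_rate

sens, spec, conc, fpr, fnr = quality_indicators(tp=45, fp=5, fn=3, tn=47)
```

With these hypothetical counts, concordance is 92% and the false-positive error rate is 5%.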

Keywords: malaria, microscopy, quality assurance, RDT

Procedia PDF Downloads 198
967 Development of an Interactive and Robust Image Analysis and Diagnostic Tool in R for Early Detection of Cervical Cancer

Authors: Kumar Dron Shrivastav, Ankan Mukherjee Das, Arti Taneja, Harpreet Singh, Priya Ranjan, Rajiv Janardhanan

Abstract:

Cervical cancer is one of the most common cancers among women worldwide and can be cured if detected early. Manual pathology, which is typically utilized at present, has many limitations. The current gold standard for cervical cancer diagnosis is exhaustive and time-consuming because it relies heavily on the subjective knowledge of oncopathologists, which leads to misdiagnosis and missed diagnoses resulting in false negatives and false positives. To reduce the time and complexity associated with early diagnosis, we require an interactive diagnostic tool for early detection, particularly in developing countries where cervical cancer incidence and related mortality are high. Incorporation of digital pathology in place of manual pathology for cervical cancer screening and diagnosis can increase precision and strongly reduce the chance of error in a time-specific manner. Thus, we propose a robust and interactive cervical cancer image analysis and diagnostic tool, which can categorically process both histopathological and cytopathological images to identify abnormal cells in the least amount of time and in settings with minimum resources. Furthermore, incorporation of a set of specific parameters that are typically referred to for identification of abnormal cells, with the help of the open-source software R, is one of the major highlights of the tool. The software has the ability to automatically identify and quantify morphological features, color intensity, sensitivity, and other parameters digitally to differentiate abnormal from normal cells, which may improve and accelerate screening and early diagnosis, ultimately leading to timely treatment of cervical cancer.
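
The kind of parameter quantification described, segmenting bright candidate regions and measuring their area and intensity, can be sketched as follows (in Python rather than R, purely for illustration; the patch and threshold are hypothetical):

```python
# Hypothetical 4x4 grayscale patch: dark background, bright candidate region.
image = [[10, 12, 200, 210],
         [11, 13, 205, 215],
         [ 9, 14, 198, 220],
         [10, 12, 202, 208]]

threshold = 100                           # separates background from candidate cells
bright = [px for row in image for px in row if px > threshold]
area = len(bright)                        # morphological feature: region area (pixels)
mean_intensity = sum(bright) / area       # colour/intensity feature
```

A real tool would of course work on full slide images and a richer parameter set; this only illustrates the thresholding-and-quantification step.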

Keywords: cervical cancer, early detection, digital pathology, screening

Procedia PDF Downloads 143
966 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a feature vector of a specific dimension from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, algorithms generally reduce image detail by pooling. This operation overlooks the details that forensic experts pay close attention to. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images against known frontal ID photos of the same persons. Downscaling and manual handling were performed on the testing images. The results supported that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field.
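
Score-level fusion of likelihood ratios can be sketched very simply: under an independence assumption, the likelihood ratios from the automated system and the human examiner combine by multiplication, i.e. their log-LRs add. The numbers below are illustrative, not the paper's:

```python
import math

def fuse_log_lr(lr_system, lr_examiner):
    """Combine two likelihood ratios in the log domain (independence assumed)."""
    return math.log10(lr_system) + math.log10(lr_examiner)

fused = fuse_log_lr(lr_system=100.0, lr_examiner=10.0)   # log10 LR = 2 + 1
same_source = fused > 0   # positive log-LR supports the same-source hypothesis
```

Real fusion schemes typically calibrate the two score distributions first, so this is only the core arithmetic of the idea.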

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 99
965 Effects of Hypoxic Duration at Different Growth Stages on Yield Potential of Waxy Corn (Zea mays L.)

Authors: S. Boonlertnirun, R. Suvannasara, K. Boonlertnirun

Abstract:

Hypoxia has negative effects on growth and crop yield; its severity varies depending on crop growth stage, duration of hypoxia, and crop species. The objective was to identify the growth stage most sensitive to hypoxia and the duration of hypoxia negatively affecting growth and yield of waxy corn. A pot experiment was conducted using a split plot in a randomized complete block with three growth stages: V3 (3-4 true leaves), V7 (7-8 true leaves), and R1 (silking stage), and three hypoxic durations: 6, 9, and 12 days, in an open-ended outdoor greenhouse from January to March 2013. The results revealed that the growth stages differed significantly (p < 0.05) in their responses to hypoxia: the sensitive growth stage affecting plant height, yield, and yield components was mostly the V7 stage, whereas leaf greenness and days to silking were sensitive to hypoxia at the R1 stage. Hypoxic duration significantly affected yield and yield components, with a duration of twelve days showing a greater negative effect than the others. It can be concluded that waxy corn plants waterlogged at the V7 growth stage for twelve days suffered the most negative effect on yield and yield components.

Keywords: hypoxia duration, waxy corn, growth stage, Zea mays L.

Procedia PDF Downloads 359
964 Domain-Specific Deep Neural Network Model for Classification of Abnormalities on Chest Radiographs

Authors: Nkechinyere Joy Olawuyi, Babajide Samuel Afolabi, Bola Ibitoye

Abstract:

This study collected a preprocessed dataset of chest radiographs and formulated a deep neural network model for detecting abnormalities. It also evaluated the performance of the formulated model and implemented a prototype of it, with the view of developing a deep neural network model that automatically classifies abnormalities in chest radiographs. To achieve the overall purpose of this research, a large set of chest x-ray images was sourced and collected from the CheXpert dataset, an online repository of annotated chest radiographs compiled by the Machine Learning Research Group, Stanford University. The chest radiographs were preprocessed into a format that can be fed into a deep neural network; the preprocessing techniques used were standardization and normalization. The classification problem was formulated as a multi-label binary classification model, which used a convolutional neural network architecture to decide whether an abnormality was present or not in the chest radiographs. The classification model was evaluated using specificity, sensitivity, and Area Under Curve (AUC) score as the parameters. A prototype of the classification model was implemented using the Keras open-source deep learning framework in the Python programming language. Based on the AUC-ROC analysis, the model was able to classify Atelectasis, Support devices, Pleural effusion, Pneumonia, a normal CXR (no finding), Pneumothorax, and Consolidation. However, Lung opacity and Cardiomegaly had probabilities of less than 0.5 and were thus classified as absent. Precision, recall, and F1 score were all 0.78; this implies that the numbers of False Positives and False Negatives are the same, revealing some measure of label imbalance in the dataset. The study concluded that the developed model is sufficient to classify abnormalities present in chest radiographs as present or absent.
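
The multi-label decision rule described, one independent sigmoid output per abnormality with 0.5 as the presence threshold (which is how Lung opacity and Cardiomegaly came to be classified as absent), can be sketched as follows. The labels are a subset from the abstract; the logits are invented, not the trained model's outputs:

```python
import math

LABELS = ["Atelectasis", "Pleural effusion", "Lung opacity", "Cardiomegaly"]

def sigmoid(z):
    """Map a raw network output to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

logits = [1.2, 0.8, -0.4, -1.1]                 # hypothetical network outputs
probs = [sigmoid(z) for z in logits]
present = {lab: p > 0.5 for lab, p in zip(LABELS, probs)}  # 0.5 threshold
```

Unlike softmax multi-class classification, each label is decided independently, so several abnormalities can be flagged on the same radiograph.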

Keywords: transfer learning, convolutional neural network, radiograph, classification, multi-label

Procedia PDF Downloads 84
963 Seismic Response Control of 20-Storey Benchmark Building Using True Negative Stiffness Device

Authors: Asim Qureshi, R. S. Jangid

Abstract:

Seismic response control of structures is generally achieved by using control devices which either dissipate the input energy or modify the dynamic properties of the structure. In this paper, the response of a 20-storey benchmark building supplemented by viscous dampers and a Negative Stiffness Device (NSD) is assessed by numerical simulations using the Newmark-beta method. True negative stiffness is an adaptive passive device which assists the motion, unlike positive stiffness. The structure used in this study is subjected to four standard ground motions varying from moderate to severe and from near-fault to far-field earthquakes. The objective of the present study is to show the effectiveness of the adaptive negative stiffness device (NSD and passive dampers together) relative to passive dampers alone. This is done by comparing the responses of the uncontrolled structure (i.e., without any device) with those of the structure having passive dampers only and of the structure supplemented with the adaptive negative stiffness device. Various performance indices, top floor displacement, top floor acceleration, and inter-storey drifts are used as comparison parameters. It is found that the NSD together with passive dampers is quite effective in reducing the response of the aforementioned structure relative to the structure without any device or with passive dampers only. Base shear and acceleration are reduced significantly by incorporating the NSD, at the cost of increased inter-storey drifts, which can be compensated for using the passive dampers.
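
The Newmark-beta time stepping used for such simulations can be sketched for a single degree of freedom (the benchmark building itself is a large multi-degree-of-freedom model; this is only the integrator, in its unconditionally stable average-acceleration form, with illustrative parameters):

```python
def newmark_sdof(m, c, k, p, dt, u0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark integration of m*u'' + c*u' + k*u = p(t)."""
    a = (p[0] - c * v0 - k * u0) / m          # initial acceleration from equilibrium
    u, v = u0, v0
    keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)  # effective stiffness
    history = [u]
    for pi in p[1:]:
        peff = (pi
                + m * (u / (beta * dt ** 2) + v / (beta * dt)
                       + (1 / (2 * beta) - 1) * a)
                + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                       + dt * (gamma / (2 * beta) - 1) * a))
        u_new = peff / keff
        v_new = (gamma / (beta * dt)) * (u_new - u) + (1 - gamma / beta) * v \
                + dt * (1 - gamma / (2 * beta)) * a
        a = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        u, v = u_new, v_new
        history.append(u)
    return history

# Free vibration of an undamped unit oscillator released from u = 1:
history = newmark_sdof(m=1.0, c=0.0, k=1.0, p=[0.0] * 629, dt=0.01, u0=1.0)
```

For the undamped test case the scheme conserves amplitude, so the displacement returns close to 1 after one natural period (about 6.28 s).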

Keywords: adaptive negative stiffness device, apparent yielding, NSD, passive dampers

Procedia PDF Downloads 394
962 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis

Authors: H. Jung, N. Kim, B. Kang, J. Choe

Abstract:

History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to uncertainties in the initial reservoir models. Therefore, it is important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to extract the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models which show the most similar or dissimilar well oil production rates (WOPR) to the true values (10% each). The other 80% of models are then classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models which have a similar geological trend to the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results: it fails to find the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by identifying the proper channel trend. Furthermore, it gives dependable predictions of future performance with reduced uncertainties. 
We propose a novel classification scheme which integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models which have a channel trend similar to the reference in the lowered-dimension space.
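
The selection-and-averaging step at the heart of the scheme can be sketched without the PCA/MDS/SVM machinery: rank models by WOPR mismatch with the observation, keep the low-error fraction, and average their facies fields into a probability map. All numbers and model names below are invented:

```python
# Simplified, stdlib-only sketch of ranking models by WOPR error and building
# a probability map from the selected ensemble. Data are hypothetical.

def wopr_error(simulated, observed):
    """Sum of squared WOPR mismatches for one model."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

def probability_map(fields):
    """Cell-wise average of binary channel-facies fields (1 = channel)."""
    n = len(fields)
    return [sum(f[i] for f in fields) / n for i in range(len(fields[0]))]

observed = [100.0, 90.0, 80.0]
models = {
    "m1": {"wopr": [98.0, 91.0, 79.0], "facies": [1, 1, 0, 0]},
    "m2": {"wopr": [60.0, 55.0, 50.0], "facies": [0, 0, 1, 1]},
    "m3": {"wopr": [101.0, 88.0, 82.0], "facies": [1, 0, 0, 0]},
}
ranked = sorted(models, key=lambda k: wopr_error(models[k]["wopr"], observed))
selected = ranked[:2]                       # keep the low-error models
pmap = probability_map([models[k]["facies"] for k in selected])
```

In the paper the split into low- and high-error models is learned by the SVM in the MDS plane rather than by direct ranking; the averaging into a probability map is the same idea.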

Keywords: history matching, principal component analysis, reservoir modelling, support vector machine

Procedia PDF Downloads 134
961 Detection and Classification of Strabismus Using Convolutional Neural Network and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG-16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using the facial landmarks, the eye region is segmented from the aligned face and fed into the VGG-16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angle that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively. 
It also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
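
The stage-2 geometry, the distance between a pupil center and an eye landmark together with the angle that offset makes with the horizontal axis, can be sketched directly. Coordinates below are hypothetical pixel values, not from the study:

```python
import math

def pupil_offset(pupil, landmark):
    """Distance and angle (degrees from horizontal) from a landmark to a pupil center."""
    dx = pupil[0] - landmark[0]
    dy = pupil[1] - landmark[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

dist, ang = pupil_offset(pupil=(13.0, 8.0), landmark=(10.0, 4.0))
```

A 3-4-5 offset gives a distance of 5 pixels at roughly 53 degrees; comparing such measurements between the two eyes is what characterizes the degree and direction of misalignment.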

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 55
960 A Rule Adumbrated: Bailment on Terms

Authors: David Gibbs-Kneller

Abstract:

Only parties to a contract can enforce it. This is the doctrine of privity of contract. Carriage contracts frequently involve intermediated relationships. While the carrier and cargo-owner will agree a contract of carriage, there is no privity or consideration between the cargo-owner and third parties. To overcome this, the contract utilizes ‘bailment on terms’, or the rule in Morris. Morris v C W Martin & Sons Ltd is authority for the following: a sub-bailee and bailor may rely on the terms of a bailment where the bailor has consented to sub-bailment “on terms”. Bailment on terms can play a significant part in making litigation decisions and determining liability. It is used in standard form contracts, and courts have also strived to find consent to bailment on terms in agreements so as to avoid the consequences of privity of contract. However, what this paper exposes is the false legal basis for this model. Lord Denning gave an adumbrated account of the law of bailments to justify the rule in Morris. What Lord Denning was really doing was objecting to the doctrine of privity. To do so, he wrongly asserted there was a lacuna in the law that meant third parties could not avail themselves of the terms of a contract. Next, he provided a false analogy between purely contractual rights and possessory liens. Finally, he gave accounts of authorities claiming they supported the rule in Morris when they did not. Surprisingly, subsequent case law on the point has not properly engaged with this reasoning. The Pioneer Container held that since the rule in Morris lay in bailment, the decision was not dependent on the doctrine of privity; yet the basis for this statement was Morris itself. Once these reasons have been discounted, all that bailment on terms rests on is the claim that the law of bailments is an independent source of law. Bailment on terms should not be retained, for it is contrary to established principles in the law of property, tort, and contract. 
It undermines the certainty of those principles and risks their collapse, because there is nothing that keeps bailment on terms within the confines of bailments only. As such, bailment on terms is not good law and should not be used in standard form contracts or by the courts as a means of determining liability. If bailment on terms is a pragmatic rule worth retaining, it is recommended that the rules governing carriage contracts be amended instead.

Keywords: bailment, carriage of goods, contract law, privity

Procedia PDF Downloads 159
959 Being Authentic is the New “Pieces”: A Mixed Methods Study on Authenticity among African Christian Millennials

Authors: Victor Counted

Abstract:

Staying true to oneself is complicated, and in most cases we may not fully come to terms with this reality. Like any journey, a self-discovery experience with the ‘self’ is a rollercoaster ride. The researcher engages the reader in an empirical study of authenticity tendencies among African Christian Millennials, attempting the all-important question: what does it actually mean to be true to self for the African youth? This is a comprehensive, yet unfinished, undertaking that applies authenticity theory in its exploratory navigation to uncover the “lived world” of the participants in this study. Using a mixed methods approach, the researcher gives an exhaustive account of the authenticity tendencies and experiences of the respondents, providing the reader with a unique narrative for understanding what it means to be true to oneself in Africa. In the quantitative phase, the participants recorded higher scores on the authentic living subscale of the Authenticity Scale (AS), while showing significant correlations within the subscales. Hypotheses tested in the quantitative phase statistically supported gender and church affiliation as possible predictors of the participants’ authenticity orientations, while being a native Christian and race/ethnicity were not statistically significant factors. The results helped the researcher develop the objectives of the qualitative study, in which only fifteen participants who scored high on AS authentic living were interviewed to understand why they scored high and, through that, what it means to be authentic. The hallmark of the qualitative case study exploration was the common coping mechanism of splitting that respondents adopted to deal with their self-crisis as they tried to remain authentic to self, whilst self-regulating and self-investing the self to discover ‘self’. 
Specifically, the researcher observed the respondents’ concurrent use of a kind of religious self to regulate their self-crisis, relating with the self as it fragments through different stages of splitting in hope of some kind of redemption. This observation led to the conclusion that being authentic is the new “pieces”: authenticity is in fragments. This proposition led the researcher to introduce a hermeneutical support system to enable future researchers to engage more critically and responsibly with their “living human documents” and to inspire timely solutions to the concerns of authenticity and wellbeing among Millennials in Africa.

Keywords: authenticity, self, identity, self-fragmentation, weak self integration, postmodern self, splitting

Procedia PDF Downloads 492
958 Characterization of Sorption Behavior and Mass Transfer Properties of Four Central Africa Tropical Woods

Authors: Merlin Simo Tagne, Romain Rémond

Abstract:

This study provides the sorption isotherms, their hysteresis, and the mass transfer properties of four Central African tropical woods largely used in building construction: frake, lotofa, sapelle, and ayous. Characterization of these four species in particular, and of Central African tropical woods in general, is necessary to develop conservation and treatment of wood after first transformation by drying. Isotherms were measured using a dynamic vapor sorption apparatus (Surface Measurement Systems) at 20 and 40°C. The mass diffusivity was determined in the steady state using a specific vapometer. Permeability was determined using a specialized device developed to measure over a wide range of permeability values. Permeability and mass transfer properties were determined in the tangential direction with a ‘false’ quartersawn cutting (sapelle and lotofa) and in the radial direction with a ‘false’ flatsawn cutting (ayous and frake). The samples of sapelle, ayous, and frake were heartwood, while lotofa contained heartwood as well as sapwood. The results showed that the effect of temperature on sorption behavior was lower than that of relative humidity. We also observed little difference between the sorption behaviors of our woods, and the hysteresis of sorption decreased as temperature increased. The Hailwood-Horrobin model predicts the adsorption and desorption isotherms of our woods, and parameters of this model are proposed. The results on mass transfer properties showed that, in the steady state, mass diffusivity decreases exponentially as basal density increases. In the desorption phase, mass diffusivity is greater than in the adsorption phase. The permeability of our woods is greater than that of Australian hardwoods but lower than that of temperate woods. It is difficult to define a relationship between permeability and mass diffusivity.
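
The Hailwood-Horrobin model mentioned above, in its common one-hydrate form, expresses equilibrium moisture content as the sum of a dissolved-water term and a hydrated-water term. A minimal sketch; the parameter values are illustrative, not the fitted values from this study:

```python
def hailwood_horrobin(h, W, K, K1):
    """Equilibrium moisture content (%) at relative vapour pressure h (0-1).

    One-hydrate Hailwood-Horrobin form: M = (1800/W) * [Kh/(1-Kh) + K1*Kh/(1+K1*Kh)],
    where W, K and K1 are the fitted model parameters.
    """
    kh = K * h
    dissolved = kh / (1.0 - kh)            # dissolved-water term
    hydrate = K1 * kh / (1.0 + K1 * kh)    # hydrated-water term
    return (1800.0 / W) * (dissolved + hydrate)

# Illustrative parameters only; the paper proposes fitted values per species.
emc = hailwood_horrobin(h=0.65, W=330.0, K=0.79, K1=5.0)
```

Fitting W, K, and K1 separately to the adsorption and desorption branches at each temperature is what captures the hysteresis reported above.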

Keywords: tropical woods, sorption isotherm, diffusion coefficient, gas permeability, Central Africa

Procedia PDF Downloads 462
957 BeamGA Median: A Hybrid Heuristic Search Approach

Authors: Ghada Badr, Manar Hosny, Nuha Bintayyash, Eman Albilali, Souad Larabi Marie-Sainte

Abstract:

The median problem is widely applied to derive the most reasonable rearrangement phylogenetic tree for many species. More specifically, the problem is concerned with finding a permutation that minimizes the sum of distances between itself and a set of three signed permutations. Genomes with an equal number of genes but in different orders can be represented as permutations. In this paper, an algorithm named BeamGA median is proposed that combines a heuristic search approach (local beam search) as an initialization step to generate a number of solutions, after which a Genetic Algorithm (GA) is applied to refine the solutions, aiming to achieve a better median with the smallest possible reversal distance from the three original permutations. In this approach, any genome rearrangement distance can be applied; here, we use the reversal distance. To the best of our knowledge, this approach has not previously been applied to the median problem. Our approach considers a true biological evolution scenario by applying the concept of common intervals during the GA optimization process. This allows us to imitate true biological behavior and speed up the convergence of the genetic approach. We were able to handle permutations with a large number of genes within acceptable time performance and with the same or better accuracy compared to existing algorithms.
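
The median objective itself is easy to state in code: the score of a candidate permutation is the sum of its distances to the three input permutations. In this sketch the breakpoint count stands in for the reversal distance used in the paper, since it is short to compute and related (each reversal removes at most two breakpoints, so reversal distance is at least half the breakpoint count); the toy genomes are invented:

```python
def adjacencies(p):
    """Adjacency set of a framed, unsigned permutation (0 and n+1 added as frame)."""
    ext = [0] + p + [len(p) + 1]
    return {frozenset(pair) for pair in zip(ext, ext[1:])}

def breakpoint_distance(p, q):
    """Adjacencies of p that are broken in q (a simple stand-in for reversal distance)."""
    return len(adjacencies(p) - adjacencies(q))

def median_score(candidate, genomes):
    """Median objective: total distance from the candidate to all input genomes."""
    return sum(breakpoint_distance(candidate, g) for g in genomes)

genomes = [[1, 2, 3, 4], [2, 1, 3, 4], [1, 2, 4, 3]]
best = min(genomes, key=lambda c: median_score(c, genomes))
```

BeamGA median searches far beyond the input permutations themselves, of course; this only shows the objective that the beam-search initialization and GA refinement are minimizing.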

Keywords: median problem, phylogenetic tree, permutation, genetic algorithm, beam search, genome rearrangement distance

Procedia PDF Downloads 240
956 Fault Prognostic and Prediction Based on the Importance Degree of Test Point

Authors: Junfeng Yan, Wenkui Hou

Abstract:

Prognostics and Health Management (PHM) is a technology for monitoring equipment status and predicting impending faults. It is used to predict potential faults, provide fault information, and track trends of system degradation by capturing characteristic signals, so how to detect characteristic signals is very important. The selection of test points plays a very important role in detecting characteristic signals. Traditionally, a dependency model is used to select the test points containing the most detection information. However, for large complicated systems, the dependency model is sometimes not easily built, and the greater trouble is how to calculate the matrix. On this premise, this paper provides a highly effective method for selecting test points without a dependency model, based instead on the signal flow model: a diagnosis model built on failure modes, which focuses on the system's failure modes and the dependency relationships between test points and faults. In the signal flow model, fault information can flow from the beginning to the end. According to the signal flow model, we can find the location and structure information of every test point and module. We break the signal flow model into serial and parallel parts to obtain the final relationship function between the system's testability or prediction metrics and the test points. Further, through partial differentiation, we obtain every test point's importance degree in determining the testability metrics, such as the undetected rate, false alarm rate, and untrusted rate. This contributes to installing test points according to real requirements and also provides a solid foundation for Prognostics and Health Management. Judging by its effect in practical engineering applications, the method is very efficient.
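
The partial-derivative idea can be illustrated for the simplest serial case: with detection probabilities d_i at each test point, the undetected rate is the product of (1 - d_i), and a test point's importance degree is the magnitude of the partial derivative of that rate with respect to its d_i. This is a hedged simplification of the paper's general serial/parallel decomposition; the probabilities are invented:

```python
from math import prod

def undetected_rate(d):
    """Undetected rate of a serial chain: UR = prod(1 - d_i)."""
    return prod(1 - di for di in d)

def importance(d, k):
    """|dUR/dd_k|: the undetected rate of the chain with test point k removed."""
    return prod(1 - di for i, di in enumerate(d) if i != k)

d = [0.9, 0.5, 0.7]                # detection probability at each test point
ur = undetected_rate(d)            # 0.1 * 0.5 * 0.3 = 0.015
ranking = sorted(range(len(d)), key=lambda k: importance(d, k), reverse=True)
```

Here the most important test point is the one whose removal would raise the undetected rate the most, which is exactly the information needed to decide where test points must be installed.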

Keywords: false alarm rate, importance degree, signal flow model, undetected rate, untrusted rate

Procedia PDF Downloads 356
955 NGOs from the Promotion of Civic Participation to Public Problems Solving: Case Study Urmia, Iran

Authors: Amin Banae Babazadeh

Abstract:

In the contemporary world, NGOs are considered an important tool for motivating the community, committed to their true mission of promoting civic participation and strengthening social identities. The functional characteristics of non-governmental organizations make them a lever for the centers of political and social development of powerful governments, since they are concrete, familiar with the problems of society, and equipped with operational strategies that facilitate the process of mutual trust between the people and organizations. NGOs, on the one hand, offer reasonable solutions in line with approved organizations, acting as agents to match the facts and reality of society; on the other hand, they become a tool for true political, social, and economic behavior. NGOs are active in the formulation of national relations and policy in an organized and disciplined way based on three main factors: resources, policies, and institutions. In a centralized system, organizations are not restricted to state administrative bodies, and this process in a democratic system limits the accumulation of desires and expectations and ultimately reaches the desired place. Hence, this research emphasizes field research (a questionnaire) and, in light of the development and evolution of the role of NGOs, analyzes the effects of these centers on youth. The hypothesis is therefore that there is a direct relationship between enlightenment, the effectiveness of policy towards NGOs, and the resolution of social harms.

Keywords: civic participation, community vulnerability, insightful, NGO, Urmia

Procedia PDF Downloads 218
954 Classification of Barley Varieties by Artificial Neural Networks

Authors: Alper Taner, Yesim Benal Oztekin, Huseyin Duran

Abstract:

In this study, an Artificial Neural Network (ANN) was developed to classify barley varieties. For this purpose, the physical properties of barley varieties were determined and ANN techniques were used. The physical properties of 8 barley varieties grown in Turkey, namely thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity, and grain colour parameters, were determined, and these properties were found to be statistically significant with respect to variety. Three ANN models, N-1, N-2, and N-3, were constructed, and their performances were compared. The best-fit model was determined to be N-1, whose structure was designed with an input layer of 11 neurons, 2 hidden layers, and an output layer. Thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity, and grain colour parameters were used as input parameters, and variety as the output parameter. R², Root Mean Square Error, and Mean Error for the N-1 model were found to be 99.99%, 0.00074, and 0.009%, respectively. All results obtained by the N-1 model were observed to be quite consistent with real data. With this model, it would be possible to construct automation systems for classification and cleaning in flour mills.
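
The forward pass of such a network (11 physical-property inputs, two hidden layers, and one output score per variety) can be sketched as follows. The weights are random placeholders and the feature values are hypothetical, not the trained N-1 model:

```python
import math
import random

random.seed(0)  # deterministic placeholder weights

def layer(inputs, n_out):
    """One dense layer with tanh activation and random placeholder weights."""
    return [math.tanh(sum(random.uniform(-1, 1) * x for x in inputs)
                      + random.uniform(-1, 1))    # bias term
            for _ in range(n_out)]

# 11 physical properties of one kernel sample (hypothetical values).
features = [42.1, 4.3, 0.81, 28.5, 55.2, 0.62, 1.32, 53.0, 34.8, 21.5, 18.9]
h1 = layer(features, 8)          # first hidden layer
h2 = layer(h1, 8)                # second hidden layer
scores = layer(h2, 8)            # one score per barley variety
predicted = scores.index(max(scores))
```

In a trained network the weights would come from backpropagation against labelled samples; the argmax over the 8 output scores is what turns the pass into a variety classification.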

Keywords: physical properties, artificial neural networks, barley, classification

Procedia PDF Downloads 146