Search results for: speed detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6049

259 Innovative Technologies Functional Methods of Dental Research

Authors: Sergey N. Ermoliev, Margarita A. Belousova, Aida D. Goncharenko

Abstract:

Application of a diagnostic complex of highly informative functional methods (electromyography, reodentography, laser Doppler flowmetry, reoperiodontography, vital computer capillaroscopy, optical tissue oximetry, laser fluorescence diagnosis) makes it possible to perform a multifactorial analysis of dental status and to prescribe complex etiopathogenetic treatment. Introduction. A complex of innovative, highly informative, and safe functional diagnostic methods is needed to improve the quality of patient treatment through the early detection of stomatologic diseases. The purpose of the present study was to investigate the etiology and pathogenesis of functional disorders identified in pathologies of hard dental tissue, dental pulp, periodontium, oral mucosa, and chewing function, and to develop new approaches to the diagnosis of dental diseases. Material and methods. 172 patients were examined. The density of the hard tissues of the teeth and jaw bone was studied by intraoral ultrasonic densitometry (USD). The electromyographic activity of the masticatory muscles was assessed by electromyography (EMG). The functional state of the dental pulp vessels was assessed by reodentography (RDG) and laser Doppler flowmetry (LDF). Regional blood flow in the periodontal tissues was studied by reoperiodontography (RPG). Periodontal microcirculation was studied by vital computer capillaroscopy (VCC) and laser Doppler flowmetry (LDF). The metabolic level of the mucous membrane was determined by optical tissue oximetry (OTO) and laser fluorescence diagnosis (LFD). Results and discussion. The results revealed changes in the mineral density of the hard tissues of the teeth and jaw bone, the bioelectric activity of the masticatory muscles, and the regional blood flow and microcirculation in the dental pulp and periodontal tissues. The LDF and OTO methods estimated fluctuations in the saturation level and oxygen transport in the periodontal microvasculature. 
LFD identified changes in the concentrations of compounds (nicotinamide, flavins, lipofuscin, porphyrins) involved in metabolic processes. Our preliminary results confirmed the feasibility and safety of the intraoral ultrasound densitometry technique for measuring the density of periodontal bone tissue. Conclusion. Application of the diagnostic complex of the above-mentioned highly informative functional methods makes it possible to perform a multifactorial analysis of dental status and to prescribe complex etiopathogenetic treatment.

Keywords: electromyography (EMG), reodentography (RDG), laser Doppler flowmetry (LDF), reoperiodontography method (RPG), vital computer capillaroscopy (VCC), optical tissue oximetry (OTO), laser fluorescence diagnosis (LFD)

Procedia PDF Downloads 253
258 An Observation Approach of Reading Order for Single Column and Two Column Layout Template

Authors: In-Tsang Lin, Chiching Wei

Abstract:

Reading order is an important task in many digitization scenarios that involve preserving the logical structure of a document. A survey of the literature shows that state-of-the-art algorithms cannot reliably recover the reading order of portable document format (PDF) files with rich formatting and diverse layout arrangements. In recent years, most studies on reading-order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to detecting the reading-order relationships between logical components, such as cross-references. Over three years of development, the company Foxit has built a layout recognition (LR) engine, demonstrated in revision 20601, that aims for accurate reading order. The bounding box of each paragraph is obtained correctly by the Foxit LR engine, but the resulting reading order is not always correct for single-column and two-column layouts, owing to tables, formulas, footers, and multiple small separated bounding boxes. An algorithm is therefore developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a novel observation method (here called the MESH method) is proposed to open a new direction in reading-order research. Two important parameters are introduced: the number of bounding boxes to the right of the present bounding box (NRight) and the number of bounding boxes below the present bounding box (Nunder). The normalized x-value (x divided by the page width), the normalized y-value (y divided by the page height), and the x- and y-positions of each bounding box are also taken into consideration. 
Initial experimental results on the single-column layout format demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using the proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, preliminary results demonstrate a 44.44% absolute improvement over 2 PDF files (18 pages in total), compared with the same baseline, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple-small-separated-bounding-box issue can be solved with the MESH method. Three issues remain unsolved: tables, formulas, and randomly scattered small bounding boxes. However, detecting the table position and recognizing the table structure are beyond the scope of this paper and require separate research. Future work will address how to detect the position of a table on the page and extract its content.
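The two MESH parameters lend themselves to a compact implementation. The sketch below is an illustrative Python rendering, not Foxit code: boxes are assumed to be axis-aligned tuples (x, y, w, h) normalized by page width and height with the origin at the top-left, and the ordering heuristic (sort by descending NRight, then descending Nunder) is our own reading of the method.

```python
# Illustrative sketch of the MESH parameters (NRight, Nunder) for
# axis-aligned paragraph bounding boxes. Box = (x, y, w, h), origin at
# the top-left, coordinates normalized by page width/height.

def mesh_params(boxes):
    """For each box, count the boxes entirely to its right (NRight)
    and entirely below it (Nunder)."""
    params = []
    for bx, by, bw, bh in boxes:
        n_right = sum(1 for x, y, w, h in boxes if x >= bx + bw)
        n_under = sum(1 for x, y, w, h in boxes if y >= by + bh)
        params.append((n_right, n_under))
    return params

def reading_order(boxes):
    """Return box indices ordered so that boxes with more material to
    their right and below them come first (top-left blocks lead)."""
    params = mesh_params(boxes)
    return sorted(range(len(boxes)),
                  key=lambda i: (-params[i][0], -params[i][1]))
```

On a two-column page, boxes in the left column have a large NRight and sort first, top to bottom, followed by the right column, which matches conventional reading order; a single column degenerates to a top-to-bottom sort on Nunder.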

Keywords: document processing, reading order, observation method, layout recognition

Procedia PDF Downloads 154
257 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE), backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J, which measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. 
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
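To make the forward/backward-adjoint loop concrete, here is a deliberately simplified sketch in which the NSE is replaced by 1D periodic linear diffusion (which, unlike the NSE, is linear and self-adjoint). Grid size, step count, and learning rate are illustrative assumptions, and the segmentation and Laplacian regularizer of the paper are omitted.

```python
import numpy as np

# Toy adjoint-gradient-method (AGM) loop: recover the initial field u0
# of a 1D periodic heat equation from its diffused state v1 by gradient
# descent on J = 0.5 * ||forward(u0) - v1||^2. The NSE is replaced by
# linear diffusion for brevity; all parameter values are illustrative.

N, steps, nu = 64, 50, 0.2        # grid points, time steps, diffusion number

def forward(u):
    """Explicit-Euler forward diffusion (stable for nu <= 0.5)."""
    for _ in range(steps):
        u = u + nu * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return u

x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u0_true = np.sin(x) + 0.5 * np.sin(3 * x)
v1 = forward(u0_true)             # target ("observed") field at t = 1

u0 = np.zeros(N)                  # initial guess at t = 0
lr = 1.0
for _ in range(500):
    r = forward(u0) - v1          # residual at t = 1
    # diffusion is self-adjoint, so the adjoint transport of the
    # residual back to t = 0 is just another forward integration
    u0 -= lr * forward(r)

final_err = np.abs(forward(u0) - v1).max()
```

Each iteration thus costs one forward and one adjoint sweep, exactly the trade the abstract describes: the unstable single backward integration is replaced by many stable forward/adjoint pairs.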

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 201
256 Relationship between Thumb Length and Pointing Performance on Portable Terminal with Touch-Sensitive Screen

Authors: Takahiro Nishimura, Kouki Doi, Hiroshi Fujimoto

Abstract:

Touch-sensitive screens that serve as displays and input devices have been adopted in many portable terminals such as smartphones and personal media players, and the market for touch-sensitive screens has expanded greatly. One advantage of touch-sensitive screens is the flexibility they allow in graphical user interface (GUI) design, and it is imperative to design an appropriate GUI to realize an easy-to-use interface. Moreover, it is important to evaluate the relationship between pointing performance and GUI design. There is much knowledge regarding easy-to-use GUI designs for portable terminals with touch-sensitive screens, but most of it has focused on design approaches for women or children with small hands. In contrast, GUI design approaches for users with large hands have not received sufficient attention. In this study, to obtain knowledge that contributes to the establishment of individualized easy-to-use GUI design guidelines, we conducted experiments to investigate the relationship between thumb length and pointing performance on portable terminals with touch-sensitive screens. Fourteen college students participated in the experiment and were divided into two groups based on thumb length. Specifically, based on a Japanese anthropometric database, participants with thumbs longer than 64.2 mm were assigned to the L (Long) group, and those with thumbs longer than 57.4 mm but shorter than 64.2 mm to the A (Average) group. They took part in this study under the authorization of Waseda University's 'Ethics Review Committee on Research with Human Subjects'. We created an application for the experimental task and implemented it on a portable terminal with a projected capacitive touch-sensitive screen (iPod touch, 4th generation). The display measured 3.5 inches with a 960 × 640-pixel resolution at 326 ppi (pixels per inch). This terminal was selected as the experimental device because of its wide use and market share. 
The operational procedure of the application was as follows. First, participants placed their thumb on the start position. Then, one cross-shaped target appeared at random in a 10 × 7 array of 70 positions. Participants pointed at the target with their thumb as accurately and as quickly as possible, then returned their thumb to the start position and waited. The task ended when all 70 targets had each been pointed at once. We adopted absolute error, variable error, and pointing time as evaluation indices of pointing performance on the portable terminal. The results showed that pointing performance varied with thumb length. In particular, on the lower right side of the screen, the performance of the L group, with long thumbs, was low. Further, we present an approach for designing easy-to-use button GUIs for users with long thumbs. The contributions of this study include revealing the relationship between pointing performance and the user's thumb length when using a portable terminal, in terms of accuracy, precision, and speed of pointing. We hope these findings contribute to easy-to-use GUI designs for users with large hands.
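The three evaluation indices can be illustrated with a short sketch. The definitions below (mean distance to the target for absolute error, dispersion about the touch centroid for variable error) follow common motor-control usage and are an assumption, since the abstract does not give the authors' exact formulas; the touch coordinates are invented.

```python
import math

# Hypothetical sketch of two of the pointing-performance indices for
# one target position: absolute error (mean distance of touch points
# from the target) and variable error (dispersion of touch points
# around their own centroid). Definitions are assumptions, not the
# authors' exact formulas.

def centroid(points):
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def absolute_error(points, target):
    """Mean Euclidean distance of the touches from the target (accuracy)."""
    tx, ty = target
    return sum(math.hypot(x - tx, y - ty) for x, y in points) / len(points)

def variable_error(points):
    """RMS distance of the touches from their centroid (precision)."""
    cx, cy = centroid(points)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in points) / len(points))

touches = [(101.0, 52.0), (99.0, 48.0), (102.0, 50.0)]   # pixels
ae = absolute_error(touches, (100.0, 50.0))
ve = variable_error(touches)
```

Pointing time, the third index, is simply the elapsed time from target onset to touch and needs no formula.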

Keywords: pointing performance, portable terminal, thumb length, touch-sensitive screen

Procedia PDF Downloads 140
255 Finding the Association Rule between Nursing Interventions and Early Evaluation Results of In-Hospital Cardiac Arrest to Improve Patient Safety

Authors: Wei-Chih Huang, Pei-Lung Chung, Ching-Heng Lin, Hsuan-Chia Yang, Der-Ming Liou

Abstract:

Background: In-hospital cardiac arrest (IHCA) threatens inpatients' lives and seriously affects patient safety, the quality of inpatient care, and hospital services. Health providers must identify the signs of IHCA early to prevent its occurrence. This study considers the potential association between early signs of IHCA and the care provided by nurses and other professionals before an IHCA occurs. The aim of this study is to identify significant associations between nursing interventions and abnormal early evaluation results of IHCA, to help health care providers monitor inpatients at risk of IHCA and increase the opportunities for early detection and prevention. Materials and Methods: This study used association rule mining, a data mining technique, to compute associations between nursing interventions and abnormal early evaluation results of IHCA. A nursing intervention and an abnormal early evaluation result were considered co-occurring if the intervention was provided within 24 hours of the abnormal result last being observed. The rule-based method was applied to 23.6 million electronic medical records (EMR) from a medical center in Taipei, Taiwan. The dataset includes 733 nursing-intervention concepts coded with clinical care classification (CCC) codes and 13 early evaluation results of IHCA with binary codes. Interestingness and lift values were computed as Q values to measure the co-occurrence and the strength of association between all in-hospital patient care measures and abnormal early evaluation results of IHCA. The associations were evaluated by comparing the Q values and verified by medical experts. Results and Conclusions: The results show 4195 pairs of associations between nursing interventions and abnormal early evaluation results of IHCA, each with a Q value. 
Of these, 203 pairs indicate a positive association, with Q values greater than 5. Inpatients with a high blood sugar level (hyperglycemia) show a positive association with a heart rate lower than 50 or higher than 120 beats per minute (Q = 6.636). Inpatients with a temporary pacemaker (TPM) show a significant association with a high risk of IHCA (Q = 47.403). There is a significant positive correlation between hypovolemia and the occurrence of abnormal heart rhythms (arrhythmias) (Q = 127.49). The results of this study can help prevent IHCA by enabling health care providers to recognize at-risk inpatients early, assist in monitoring patients to provide quality care, and improve IHCA surveillance and the quality of in-hospital care.
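As an illustration of the kind of association measure behind the reported Q values, the sketch below computes lift (one of the two measures named in the abstract) over a handful of invented patient records; the record encoding and code names are our own and are not taken from the CCC-coded dataset.

```python
# Illustrative lift computation: lift(A -> B) = P(A and B) / (P(A) * P(B))
# over records in which a nursing intervention and an abnormal early
# sign co-occur within the 24 h window. Records and codes are invented.

def lift(records, a, b):
    n = len(records)
    n_a = sum(a in r for r in records)
    n_b = sum(b in r for r in records)
    n_ab = sum(a in r and b in r for r in records)
    return (n_ab / n) / ((n_a / n) * (n_b / n))

records = [
    {"TPM", "low_heart_rate"},       # temporary pacemaker + bradycardia
    {"TPM", "low_heart_rate"},
    {"hyperglycemia"},
    {"hydration_care"},
    {"hydration_care", "arrhythmia"},
    {"routine_obs"},
]
q = lift(records, "TPM", "low_heart_rate")
```

A lift of 1 means the intervention and the abnormal sign are independent; values well above 1 (like the Q values greater than 5 reported above) indicate that they co-occur far more often than chance.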

Keywords: in-hospital cardiac arrest, patient safety, nursing intervention, association rule mining

Procedia PDF Downloads 253
254 Solutions to Reduce CO2 Emissions in Autonomous Robotics

Authors: Antoni Grau, Yolanda Bolea, Alberto Sanfeliu

Abstract:

Mobile robots can be used in many different applications, including mapping, search and rescue, reconnaissance, hazard detection, carpet cleaning, and exploration. However, they are limited by their reliance on traditional energy sources such as electricity and oil, which cannot always provide a convenient energy source in all situations. In an ever more eco-conscious world, solar energy offers the most environmentally clean option of all energy sources. Electricity carries the threat of pollution from its production process, and oil poses a huge threat to the environment: not only does the combustion needed to produce energy release toxic emissions (for instance, CO2), but there is also the ever-present risk of oil spillages and damage to ecosystems. Solar energy can help mitigate carbon emissions by replacing more carbon-intensive sources of heat and power. The challenge of this work is to propose the design and implementation of electric battery recharge stations. These recharge docks are based on renewable energy, namely solar energy (with photovoltaic panels), with the aim of reducing CO2 emissions. In this paper, a comparative study of the CO2 emissions produced by different energy sources (natural gas, gas oil, fuel, and solar panels) in the charging process of Segway PT batteries is carried out. For the study with solar energy, a photovoltaic panel and a buck-boost DC/DC block were used. Specifically, the STP005S-12/Db solar panel was used in our experiments. This module is a 5 Wp photovoltaic (PV) module with 36 monocrystalline cells connected in series. With these elements, a battery recharge station was built to recharge the robot batteries. For the energy storage of the DC/DC block, a series of ultracapacitors was used. 
Owing to the variation of the PV panel output with temperature and irradiation, the non-integer behavior of the ultracapacitors, and the non-linearities of the whole system, the authors used a fractional control method so that the solar panels supply the maximum allowed power and recharge the robots in the least time. Greenhouse gas emissions from electricity production vary due to regional differences in source fuel. The impact of an energy technology on the climate can be characterised by its carbon emission intensity, a measure of the amount of CO2, or CO2 equivalent, emitted per unit of energy generated. In our work, coal is the most hazardous fossil fuel, producing 53% more gas emissions than natural gas and 30% more than fuel oil. Moreover, it is remarkable that existing fossil fuel technologies produce a high carbon emission intensity through the combustion of carbon-rich fuels, whilst renewable technologies such as solar produce little or no emissions during operation, though they may incur emissions during manufacture. Solar energy can thus help to mitigate carbon emissions.

Keywords: autonomous robots, CO2 emissions, DC/DC buck-boost, solar energy

Procedia PDF Downloads 402
253 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning

Authors: Ioanna Taouki, Marie Lallier, David Soto

Abstract:

Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative-forced-choice tasks (two linguistic: a lexical decision task and a visual attention span task; one non-linguistic: an emotion recognition task) including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how well confidence ratings track accuracy in the task) relates to performance in standardized tasks assessing students' reading and general cognitive abilities, using Spearman and Bayesian correlation analyses. Second, we assessed whether young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across task domains and evaluating cross-task covariance with a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability at this early stage is related to the longitudinal learning of children in a linguistic and a non-linguistic task. Notably, we did not observe any association between students' reading skills and metacognitive processing at this early stage of reading acquisition. 
Some evidence consistent with domain-general metacognition was found, with significant positive correlations in metacognitive efficiency between the lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in the linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities, and they further stress the importance of creating educational programs that foster students' metacognitive ability as a tool for long-term learning. More research is needed to understand whether such programs can enhance metacognitive ability as a transferable skill across distinct domains or whether unique domains should be targeted separately.
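As a rough illustration of what it means for confidence ratings to "track accuracy" at the trial level, the sketch below computes the phi correlation between binarized confidence and trial accuracy. This is a crude stand-in for the hierarchical Bayesian SDT model actually used in the study, and the trial data are invented.

```python
import math

# Simplified trial-level measure of confidence-accuracy tracking: the
# phi correlation between binary confidence (1 = high) and accuracy
# (1 = correct). A crude stand-in for a meta-d'-style SDT model.

def phi(conf, acc):
    n11 = sum(c and a for c, a in zip(conf, acc))          # high & correct
    n10 = sum(c and not a for c, a in zip(conf, acc))      # high & wrong
    n01 = sum((not c) and a for c, a in zip(conf, acc))    # low & correct
    n00 = sum((not c) and (not a) for c, a in zip(conf, acc))
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

confidence = [1, 1, 1, 0, 0, 1, 0, 0]   # invented trial-by-trial data
accuracy   = [1, 1, 1, 0, 1, 1, 0, 0]
score = phi(confidence, accuracy)
```

A score near 1 means confidence closely tracks accuracy (good metacognitive sensitivity); a score near 0 means confidence carries no information about performance.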

Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition

Procedia PDF Downloads 123
252 [Keynote Talk]: New Generations and Employment: An Exploratory Study about Tensions between the Psycho-Social Characteristics of the Generation Z and Expectations and Actions of Organizational Structures Related with Employment (CABA, 2016)

Authors: Esteban Maioli

Abstract:

Generational studies have an important research tradition in the social and human sciences. On the one hand, the speed of social change in the context of globalization imposes the need to research the transformations identified both in the subjectivity of the agents involved and in their inclusion in the institutional matrix, specifically employment. Generation Z (generally considered the population group born after 1995) has unique psycho-social characteristics. On the other hand, managers often have to deal with generational differences in the workplace. Organizations have members who belong to different generations and have never before faced the challenge of managing such a diverse group. The members of each historical generation are characterized by a different set of values, beliefs, attitudes, and ambitions that manifest in their concrete actions within organizational structures. Gen Z is the only generation that can be considered fully "global," since its members were born into the consolidated context of globalization. Some salient features of Generation Z can be summarized as follows. They are the first generation fully born into a digital world; social networks and technology are integrated into their lives. They are concerned about the challenges of the modern world (poverty, inequality, climate change, among others). They are self-expressive, more liberal, and open to change. They often get bored easily, with short attention spans, and dislike routine tasks. They want to achieve a good work-life balance and are interested in a flexible work environment, as opposed to a traditional work schedule. They are critical thinkers who bring innovative and creative ideas. The research design used methodological triangulation. 
Data were collected with two techniques: a self-administered survey with multiple-choice questions and attitudinal scales, applied to a non-probabilistic sample selected by reasoned decision. In line with the multi-method strategy, in-depth interviews were also conducted. Organizations constantly face new challenges, and one of the biggest is learning to manage a multi-generational workforce. Although Gen Z has not yet been fully incorporated into the workforce (it is expected to be within five years or so), many organizations have already begun to implement a series of changes in recruitment and development. The main obstacle to retaining young talent is the gap between the expectations of iGen applicants and what companies offer. Members of the iGen expect not only a good salary and job stability but also a clear career plan, and Generation Z needs immediate feedback on its tasks. However, many organizations have yet to improve both their motivation and monitoring practices. It is essential for companies to review organizational practices anchored in the culture of the organization.

Keywords: employment, expectations, generation Z, organizational culture, organizations, psycho-social characteristics

Procedia PDF Downloads 186
251 Improvement of Oxidative Stability of Edible Oil by Microencapsulation Using Plant Proteins

Authors: L. Le Priol, A. Nesterenko, K. El Kirat, K. Saleh

Abstract:

Introduction and objectives: Polyunsaturated fatty acids (PUFAs), omega-3 and omega-6, are widely recognized as beneficial to health and normal growth. Unfortunately, due to their highly unsaturated nature, these molecules are sensitive to oxidation and thermal degradation, leading to the production of toxic compounds and unpleasant flavors and smells. Hence, it is necessary to find a suitable way to protect them. Microencapsulation by spray-drying is a low-cost encapsulation technology and the one most commonly used in the food industry. Many compounds can be used as wall materials, but there has been growing interest in the use of biopolymers, such as proteins and polysaccharides, in recent years. The objective of this study is to increase the oxidative stability of sunflower oil by microencapsulation in plant protein matrices using the spray-drying technique. Material and methods: Sunflower oil was used as a model substance for oxidizable food oils. Proteins from brown rice, hemp, pea, soy, and sunflower seeds were used as emulsifiers and microencapsulation wall materials. First, the proteins were solubilized in distilled water. Then, the emulsions were pre-homogenized using a high-speed homogenizer (Ultra-Turrax) and stabilized using a high-pressure homogenizer (HPH). The emulsion was dried in a mini spray dryer. The oxidative stability of the encapsulated oil was determined by accelerated oxidation tests with a Rancimat. The size of the microparticles was measured using a laser diffraction analyzer. The morphology of the spray-dried microparticles was imaged using environmental scanning electron microscopy (ESEM). Results: Pure sunflower oil was used as the reference material; its induction time was 9.5 ± 0.1 h. Microencapsulation of sunflower oil in pea and soy protein matrices significantly improved its oxidative stability, with induction times of 21.3 ± 0.4 h and 12.5 ± 0.4 h, respectively. 
Encapsulation with hemp proteins did not significantly change the oxidative stability of the encapsulated oil. Sunflower and brown rice proteins were ineffective materials for this application, with induction times of 7.2 ± 0.2 h and 7.0 ± 0.1 h, respectively. The volume mean diameters of the microparticles formulated with soy and pea proteins were 8.9 ± 0.1 µm and 16.3 ± 1.2 µm, respectively. The values for hemp, sunflower, and brown rice proteins could not be obtained due to agglomeration of the microparticles. ESEM images showed smooth, round microparticles with soy and pea proteins. The surfaces of the microparticles obtained with sunflower and hemp proteins were porous, and the surface was rough when brown rice proteins were used as the encapsulating agent. Conclusion: Soy and pea proteins appear to be efficient wall materials for the microencapsulation of sunflower oil by spray drying. These results are partly explained by the higher water solubility of soy and pea proteins compared with hemp, sunflower, and brown rice proteins. Acknowledgment: This work was performed, in partnership with SAS PIVERT, within the frame of the French Institute for the Energy Transition (Institut pour la Transition Energétique (ITE)) P.I.V.E.R.T. (www.institut-pivert.com), selected as an Investment for the Future (Investissement d'Avenir). This work was supported, as part of the Investments for the Future, by the French Government under the reference ANR-001-01.

Keywords: biopolymer, edible oil, microencapsulation, oxidative stability, release, spray-drying

Procedia PDF Downloads 116
250 Neuropsychological Aspects in Adolescents Victims of Sexual Violence with Post-Traumatic Stress Disorder

Authors: Fernanda Mary R. G. Da Silva, Adriana C. F. Mozzambani, Marcelo F. Mello

Abstract:

Introduction: Sexual assault against children and adolescents is a public health problem with serious consequences for their quality of life, especially for those who develop post-traumatic stress disorder (PTSD). The broad literature in this research area points to greater losses in verbal learning, explicit memory, speed of information processing, attention, and executive functioning in PTSD. Objective: To compare the neuropsychological functions of adolescents aged 14 to 17, victims of sexual violence with PTSD, with those of healthy controls. Methodology: A neuropsychological battery was applied, composed of the following subtests: WASI vocabulary and matrix reasoning; the digit subtests (WISC-IV); the Rey Auditory Verbal Learning Test (RAVLT); the spatial span subtest of the WMS-III scale; an abbreviated version of the Wisconsin test; the D2 test of concentrated attention; the prospective memory subtest of the NEUPSILIN scale; the five-digit test (FDT); and the Stroop test (Trenerry version). Participants were adolescents with a history of sexual violence in the previous six months, referred to Prove (the Violence Care and Research Program of the Federal University of São Paulo) for further treatment. Results: The results showed a deficit in the word-encoding process on the RAVLT, with impairment in the A3 (p = 0.004) and A4 (p = 0.016) measures, which compromises verbal learning (p = 0.010) and verbal recognition memory (p = 0.012); participants seemed to perform worse in the acquisition of verbal information that depends on the support of the attentional system. Worse performance was also found on list B (p = 0.047), with a lower priming effect (p = 0.026), that is, a lower recall rate for the initial words presented, and less perseveration (p = 0.002), i.e., fewer repeated words. There therefore seems to be a failure to create strategies that support the mnemonic retention of the verbal information necessary for learning. 
Sustained attention was found to be impaired, with greater loss of set in the Wisconsin test (p = 0.023), a lower rate of correct responses in stage C of the Stroop test (p = 0.023) and, consequently, a higher index of erroneous responses in stage C of the Stroop test (p = 0.023), besides more type II errors in the D2 test (p = 0.008). A higher incidence of total errors was observed in the reading stage of the FDT test (p = 0.002), which suggests fatigue in the execution of the task. Performance was compromised in the executive function of cognitive flexibility, as indicated by a higher index of total errors in the alternating stage of the FDT test (p = 0.009) and a greater number of perseverative errors in the Wisconsin test (p = 0.004). Conclusion: The data from this study suggest that sexual violence and PTSD cause significant impairment in the neuropsychological functions of adolescents, evidencing a risk to quality of life at stages that are fundamental for the development of learning and cognition.

Keywords: adolescents, neuropsychological functions, PTSD, sexual violence

Procedia PDF Downloads 109
249 Acrylamide Concentration in Cakes with Different Caloric Sweeteners

Authors: L. García, N. Cobas, M. López

Abstract:

Acrylamide, a probable carcinogen, is formed in foods processed at high temperature (>120 °C) when the free amino acid asparagine reacts with reducing sugars, mainly glucose and fructose. The repeated heating of cane juice during brown sugar production could potentially form acrylamide. This study aims to determine whether using panela in yogurt cake preparation increases acrylamide formation. A secondary aim is to analyze the acrylamide concentration in four cake confections with different caloric sweetener ingredients: beet sugar (BS), cane sugar (CS), panela (P), and a panela and chocolate mix (PC). The doughs were obtained by combining ingredients in a planetary mixer. A model system made up of flour (25%), caloric sweeteners (25%), eggs (23%), yogurt (15.7%), sunflower oil (9.4%), and brewer's yeast (2%) was applied to the BS, CS and P cakes. The ingredients of the PC cakes varied: flour (21.5%), panela chocolate (21.5%), eggs (25.9%), yogurt (18%), sunflower oil (10.8%), and brewer's yeast (2.3%). The preparations were baked for 45 min at 180 °C. Moisture was estimated by the AOAC method. Protein was determined by the Kjeldahl method. Ash percentage was calculated by weight loss after pyrolysis (≈600 °C). Fat content was measured using liquid-solid extraction in hydrolyzed raw ingredients and final confections. Carbohydrates were determined by difference, and total sugars by the Luff-Schoorl method, based on the iodometric determination of copper ions. Finally, acrylamide content was determined by LC-MS with an isocratic system (phase A: 97.5% water with 0.1% formic acid; phase B: 2.5% methanol), using an internal standard procedure. Statistical analysis was performed using SPSS v.23. One-way analysis of variance determined differences in acrylamide content and compositional analysis, with caloric sweetener as the fixed effect. Significance levels were determined by applying Duncan's test (p < 0.05).
P cakes showed a lower energy value than the other baked products; their sugar content was similar to that of BS and CS cakes, with a mean crude protein of 6.1%. Acrylamide content in the caloric sweeteners was similar to previously reported values; however, P and PC showed significantly higher concentrations, probably explained by the production procedure applied. Acrylamide formation depends on the concentration and availability of both reducing sugars and asparagine. The beet sugar samples themselves did not present acrylamide concentrations above the detection and quantification limits; nevertheless, the highest acrylamide content was measured in the BS cakes. This may be due to higher concentrations of reducing sugars and asparagine in the other raw ingredients. The cakes made with panela, cane sugar, or panela with chocolate did not differ in acrylamide content. The lack of asparagine measurements constitutes a limitation. Cakes made with panela showed lower acrylamide formation than products elaborated with beet or cane sugar.
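The group comparison described above (one-way ANOVA with sweetener type as the fixed effect) can be sketched in a few lines. This is a minimal stdlib-only illustration of the F statistic; the values below are invented for demonstration, not the study's measurements, and Duncan's post-hoc test is not reproduced here.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of groups,
    each group being a list of measurements (e.g., acrylamide
    concentrations per sweetener type)."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative only: two sweetener groups with three cakes each
f_stat = one_way_anova_f([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

In practice, the F statistic would be compared against the F distribution with (k-1, N-k) degrees of freedom to obtain the p-value reported by SPSS.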

Keywords: beet sugar, cane sugar, panela, yogurt cake

Procedia PDF Downloads 48
248 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool to show the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 so that researchers could develop the best method for distinguishing normal signals from abnormal ones. The data cover both genders, recording times vary from several seconds to several minutes, and all records are labeled normal or abnormal. Due to the limited recording time and positional accuracy of the ECG signal, and because in some diseases the signal resembles a normal one, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and R-wave extraction by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from abnormal ones.
To evaluate the efficiency of the classifiers proposed in this paper, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and of the SVM was 0.893 and 0.947, respectively. The results also indicated that greater use of nonlinear characteristics in classifying patients' signals yielded better performance. Today, research aims to quantitatively analyze the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. The ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but its recording time is limited and some of its information is hidden from the physician's viewpoint; the intelligent system proposed in this paper can therefore help physicians diagnose healthy and diseased individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
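The statistical HRV features mentioned above can be illustrated with two standard time-domain descriptors, SDNN and RMSSD, computed from R-peak times (e.g., from a Pan-Tompkins detector). These specific features are common practice in HRV analysis, not necessarily the exact feature set used in the paper:

```python
import math

def hrv_features(r_peak_times_s):
    """Basic time-domain HRV features from R-peak times (in seconds).

    Returns SDNN (standard deviation of R-R intervals) and RMSSD
    (root mean square of successive differences), both in ms.
    """
    # R-R intervals in milliseconds
    rr = [(b - a) * 1000.0 for a, b in zip(r_peak_times_s, r_peak_times_s[1:])]
    mean_rr = sum(rr) / len(rr)
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / (len(rr) - 1))
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

# Illustrative R-peak times; a real record would contain many more beats
sdnn, rmssd = hrv_features([0.0, 0.80, 1.62, 2.41, 3.23, 4.02])
```

Such scalar features, together with nonlinear descriptors from the return map, would then form the input vector to the MLP or SVM classifier.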

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 226
247 The Impact of Using Flattening Filter-Free Energies on Treatment Efficiency for Prostate SBRT

Authors: T. Al-Alawi, N. Shorbaji, E. Rashaidi, M. Alidrisi

Abstract:

Purpose/Objective(s): The main purpose of this study is to analyze the planning of SBRT treatments for localized prostate cancer with 6FFF and 10FFF energies, to see whether there is a dosimetric difference between the two energies and how plan efficiency can be increased and plan complexity reduced. A further aim is to introduce a planning method in our department to treat prostate cancer utilizing high-energy photons without increasing patient toxicity while fulfilling all dosimetric constraints for organs at risk (OAR). We then evaluate the 95% target coverage (PTV95), V5%, V2%, V1%, the low-dose volumes for OAR (V1Gy, V2Gy, V5Gy), the monitor units (beam-on time), and the homogeneity index (HI), conformity index (CI), and gradient index (GI) for each treatment plan. Materials/Methods: Two treatment plans were generated retrospectively for 15 patients with localized prostate cancer using the CT planning images acquired for radiotherapy purposes. Each plan contains two or three complete arcs with two or three different collimator angle sets. The maximum dose rate available is 1400 MU/min for the 6FFF energy and 2400 MU/min for 10FFF. Therefore, when we need to avoid changing the gantry speed during rotation, we tend to use a third arc in the 6FFF plan to accommodate the high dose per fraction. The clinical target volume (CTV) consists of the entire prostate for organ-confined disease. The planning target volume (PTV) involves a margin of 5 mm; a 3-mm margin is favored posteriorly. Organs at risk identified and contoured include the rectum, bladder, penile bulb, femoral heads, and small bowel. The prescription is to deliver 35 Gy in five fractions to the PTV and to apply constraints for the organs at risk (OAR) derived from those reported in the literature.
Results: The CI (0.99), HI (0.7), and GI (4.1) were the same for both energies, 6FFF and 10FFF, with no differences, but the total delivered MUs were much lower for the 10FFF plans (2907 for 6FFF vs. 2468 for 10FFF), and the total delivery time was 124 s for 6FFF vs. 61 s for the 10FFF beams. There were no dosimetric differences between 6FFF and 10FFF in terms of PTV coverage and mean doses; the mean doses collected for the bladder, rectum, femoral heads, penile bulb, and small bowel were in favor of 10FFF. We also obtained lower V1Gy, V2Gy, and V5Gy doses for all OAR with the 10FFF plans. Integral doses (ID, in Gy·L) were recorded for all OAR and were lower with the 10FFF plans. Conclusion: The high-energy 10FFF beam gave a lower treatment time and fewer delivered MUs; 10FFF also showed lower integral and mean doses to organs at risk. Based on this study, we suggest using a 10FFF beam for prostate SBRT treatment, which has the advantage of lowering the treatment time and thus reducing plan complexity with respect to 6FFF beams.
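The plan-quality indices reported above can be computed directly from DVH volumes. Definitions vary between publications; the sketch below uses the RTOG conformity index, the Paddick gradient index, and the Paddick conformity number, with illustrative volumes that are not taken from this study:

```python
def plan_indices(tv_cc, piv_cc, tv_in_piv_cc, v50_cc):
    """Common SBRT plan-quality indices from DVH volumes (cm^3).

    tv_cc        : planning target volume (PTV)
    piv_cc       : volume enclosed by the prescription isodose
    tv_in_piv_cc : part of the PTV covered by the prescription isodose
    v50_cc       : volume enclosed by the 50% isodose
    """
    ci_rtog = piv_cc / tv_cc                    # RTOG conformity index
    gi = v50_cc / piv_cc                        # Paddick gradient index
    cn = tv_in_piv_cc ** 2 / (tv_cc * piv_cc)   # Paddick conformity number
    return ci_rtog, gi, cn

# Illustrative volumes for a prostate PTV
ci, gi, cn = plan_indices(tv_cc=60.0, piv_cc=62.0, tv_in_piv_cc=58.0, v50_cc=250.0)
```

A GI near 4, as reported in the abstract, indicates that the 50% isodose cloud is about four times the prescription isodose volume.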

Keywords: FFF beam, SBRT prostate, VMAT, prostate cancer

Procedia PDF Downloads 59
246 Monitoring of Wound Healing Through Structural and Functional Mechanisms Using Photoacoustic Imaging Modality

Authors: Souradip Paul, Arijit Paramanick, M. Suheshkumar Singh

Abstract:

Traumatic injury is a leading worldwide health problem. Annually, millions of surgical wounds are created in the course of routine medical care. The healing of these wounds is always monitored on the basis of visual inspection. The maximal restoration of tissue functionality remains a significant concern of clinical care. Although minor injuries heal well with proper care and medical treatment, large injuries are negatively influenced by various factors (vascular insufficiency, tissue coagulation) and heal poorly. Demographically, the number of people suffering from severe wounds and impaired healing is a burden on both human health and the economy. An incomplete understanding of the functional and molecular mechanisms of tissue healing often leads to a lack of proper therapies and treatment. Hence, strong and reliable medical guidance is necessary for monitoring tissue regeneration processes. Photoacoustic imaging (PAI) is a non-invasive, hybrid imaging modality that can provide a suitable solution in this regard. Light combined with sound offers structural, functional and molecular information from greater penetration depths. Therefore, the molecular and structural mechanisms of tissue repair are readily observable with PAI, both in the superficial layer and in deep tissue regions. Blood vessel formation and growth is an essential component of tissue repair. These vessels supply nutrition and oxygen to the cells in the wound region. Angiogenesis (the formation of new capillaries from existing blood vessels) contributes to new blood vessel formation during tissue repair. The quality of tissue healing depends directly on angiogenesis. Other optical microscopy techniques can visualize angiogenesis at micron-scale penetration depths but are unable to provide deep tissue information. PAI overcomes this barrier due to its unique capability.
It is ideally suited for deep tissue imaging and provides the rich optical contrast generated by hemoglobin in blood vessels. Hence, the early angiogenesis detection provided by PAI helps in monitoring the medical treatment of the wound. Along with functional properties, mechanical properties also play a key role in tissue regeneration. The wound heals through a dynamic series of physiological events such as coagulation, granulation tissue formation, and extracellular matrix (ECM) remodeling. The resulting changes in tissue elasticity can be identified using non-contact photoacoustic elastography (PAE). In a nutshell, angiogenesis and biomechanical properties are both critical parameters for tissue healing, and both can be characterized in a single imaging modality (PAI).

Keywords: PAT, wound healing, tissue coagulation, angiogenesis

Procedia PDF Downloads 79
245 Impact of Traffic Restrictions due to Covid19, on Emissions from Freight Transport in Mexico City

Authors: Oscar Nieto-Garzón, Angélica Lozano

Abstract:

In urban areas, on-road freight transportation creates several social and environmental externalities. It is therefore crucial that freight transport considers not only economic aspects, such as reducing retailer distribution costs and improving service, but also environmental effects such as global CO2 and local emissions (e.g., particulate matter, NOX, CO) and noise. Inadequate infrastructure development, a high rate of urbanization, increasing motorization, and a lack of transportation planning are characteristics shared by urban areas in developing countries. The Metropolitan Area of Mexico City (MAMC), the Metropolitan Area of São Paulo (MASP), and Bogota are three of the largest urban areas in Latin America where air pollution is often a problem associated with emissions from mobile sources. The effect of the lockdown due to COVID-19 was analyzed for these urban areas, comparing the same period (January to August) of the years 2016-2019 with 2020. A strong reduction in the concentration of primary criteria pollutants emitted by road traffic was observed at the beginning of 2020 and after the lockdown measures. The daily mean concentration of NOX decreased by 40% in the MAMC, 34% in the MASP, and 62% in Bogota. Daily mean ozone levels increased after the lockdown measures in the three urban areas: 25% in the MAMC, 30% in the MASP, and 60% in Bogota. These changes in emission patterns from mobile sources drastically changed the ambient atmospheric concentrations of CO and NOX. The CO/NOX ratio in the morning hours is often used as an indicator of mobile-source emissions. In 2020, traffic from cars and light vehicles was significantly reduced due to the first lockdown, but buses and trucks had no restrictions. In theory, this implies a decrease in CO and NOX from cars and light vehicles while the NOX levels from trucks are maintained (or lowered due to the reduction in congestion).
At rush hours, traffic was reduced by between 50% and 75%, so trucks could reach higher speeds, which would reduce their emissions. By means of an emission model, it was found that an increase in the average speed (75%) would reduce the emissions (CO, NOX, and PM) from diesel trucks by up to 30%. It was expected that the value of the CO/NOX ratio would change due to the lockdown restrictions. However, although there was a significant reduction in traffic, CO/NOX kept its trend, decreasing to 8-9 in 2020. Hence, the traffic restrictions had no impact on the CO/NOX ratio, although they did reduce vehicle emissions of CO and NOX. Therefore, these emissions may not adequately represent the change in vehicle emission patterns, or this ratio may not be a good indicator of vehicle-generated emissions. Comparing the theoretical data with those observed during the lockdown shows that the real NOX reduction was lower than the theoretical one. The reasons could be that there are other sources of NOX emissions, so NOX emissions generated by diesel vehicles would be over-represented, or that CO emissions are underestimated. Further analysis needs to consider this ratio to evaluate the emission inventories and then to extend these results to the determination of emission control policies for non-mobile sources.
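The morning CO/NOX indicator discussed above can be sketched as a simple computation over hourly concentration series. The hour window (06:00-10:00) and the unit conventions here are assumptions for illustration, not the study's exact processing:

```python
def morning_co_nox_ratio(hourly_co_ppm, hourly_nox_ppb, morning_hours=(6, 7, 8, 9)):
    """CO/NOx ratio averaged over the morning rush hours, a common
    indicator of traffic emissions.

    hourly_co_ppm  : 24 hourly CO concentrations in ppm
    hourly_nox_ppb : 24 hourly NOx concentrations in ppb
    """
    co = sum(hourly_co_ppm[h] for h in morning_hours) / len(morning_hours)
    nox = sum(hourly_nox_ppb[h] for h in morning_hours) / len(morning_hours)
    return co * 1000.0 / nox  # convert CO from ppm to ppb, then take the ratio

# Illustrative hourly series with a morning rush-hour peak
co = [0.5] * 6 + [0.8, 0.9, 1.0, 0.9] + [0.5] * 14
nox = [50.0] * 6 + [90.0, 100.0, 110.0, 100.0] + [50.0] * 14
ratio = morning_co_nox_ratio(co, nox)
```

Tracking how this ratio evolves across years, as the abstract does, requires the same averaging window and units in every year compared.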

Keywords: COVID-19, emissions, freight transport, Latin American metropolises

Procedia PDF Downloads 113
244 Material Handling Equipment Selection Using Fuzzy AHP Approach

Authors: Priyanka Verma, Vijaya Dixit, Rishabh Bajpai

Abstract:

This research paper is aimed at selecting the appropriate material handling equipment among the given choices so that the level of automation in material handling can be enhanced. The work is a practical case scenario of material handling systems in a consumer electronic appliances manufacturing organization. The choices of material handling equipment among which the decision has to be made are Automated Guided Vehicles (AGV), Autonomous Mobile Robots (AMR), Overhead Conveyors (OC) and Battery Operated Trucks/Vehicles (BOT). A certain level of automation needs to be attained in order to reduce human intervention in the organization, and this can be achieved with the material handling equipment mentioned above. The main motive for selecting this equipment for study was the corporate financial strategy of investment and the return obtained on that investment within a stipulated time frame. Since low-cost automation of material handling has to be achieved, this equipment was selected: the investment per unit of equipment is less than 20 lakh rupees (INR), and the recovery period is less than five years. The fuzzy analytic hierarchy process (FAHP) is applied here for selecting the equipment, where the four choices are evaluated on the basis of four major criteria and 13 sub-criteria and are prioritized on the basis of the weights obtained. The FAHP used here makes use of triangular fuzzy numbers (TFN). The inability of traditional AHP to deal with subjectiveness and impreciseness in the pairwise comparison process is improved in the FAHP. The range of values for general rating purposes for all decision-making parameters is kept between 0 and 1, on the basis of expert opinions captured on the shop floor. These experts were familiar with the operating environment and shop floor activity control.
Instead of generating an exact value, the FAHP generates ranges of values to accommodate the uncertainty in the decision-making process. The four major criteria selected for the evaluation of the available material handling equipment are materials, technical capabilities, cost, and other features. The thirteen sub-criteria listed under these four major criteria are weighing capacity, load per hour, material compatibility, capital cost, operating cost and maintenance cost, speed, distance moved, space required, frequency of trips, control required, safety, and reliability issues. The key finding is that, among the four major criteria, cost emerged as the most important and is one of the key aspects on which material handling equipment selection is based. On further evaluating the available equipment for each sub-criterion, it is found that the AGV scores the highest weight in most of the sub-criteria. The complete analysis shows that the AGV is the material handling equipment best suited to all the decision criteria selected in the FAHP, and it is therefore beneficial for the organization to carry out automated material handling in the facility using AGVs.
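The weight-derivation step described above can be sketched with Buckley's geometric-mean fuzzy AHP, one common TFN-based variant (the abstract does not state which FAHP variant was used, so this is an illustrative stand-in; the pairwise judgments below are invented):

```python
def tfn_geomean(tfns):
    """Element-wise geometric mean of triangular fuzzy numbers (l, m, u)."""
    n = len(tfns)
    prod = [1.0, 1.0, 1.0]
    for l, m, u in tfns:
        prod[0] *= l
        prod[1] *= m
        prod[2] *= u
    return tuple(p ** (1.0 / n) for p in prod)

def fuzzy_ahp_weights(matrix):
    """Buckley's geometric-mean fuzzy AHP: crisp, normalised criterion
    weights from a pairwise-comparison matrix of TFNs."""
    rows = [tfn_geomean(row) for row in matrix]
    # centroid defuzzification of each row's fuzzy weight
    crisp = [(l + m + u) / 3.0 for l, m, u in rows]
    total = sum(crisp)
    return [c / total for c in crisp]

# Two criteria (e.g., cost vs. technical capability); judgments illustrative
matrix = [
    [(1, 1, 1), (2, 3, 4)],
    [(1 / 4, 1 / 3, 1 / 2), (1, 1, 1)],
]
weights = fuzzy_ahp_weights(matrix)
```

In a full application, one such matrix is built for the major criteria and one per criterion for the sub-criteria, and the alternative scores are aggregated through the weight hierarchy.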

Keywords: fuzzy analytic hierarchy process (FAHP), material handling equipment, subjectiveness, triangular fuzzy number (TFN)

Procedia PDF Downloads 413
243 Fabrication of SnO₂ Nanotube Arrays for Enhanced Gas Sensing Properties

Authors: Hsyi-En Cheng, Ying-Yi Liou

Abstract:

Metal-oxide semiconductor (MOS) gas sensors are widely used in the gas-detection market due to their high sensitivity, fast response, and simple device structures. However, the high working temperature of MOS gas sensors makes them difficult to integrate with appliances or consumer goods. One-dimensional (1-D) nanostructures are considered to have the potential to lower the working temperature due to their large surface-to-volume ratio, confined electrical conduction channels, and small feature sizes. Unfortunately, the difficulty of fabricating 1-D nanostructure electrodes has hindered the development of low-temperature MOS gas sensors. In this work, we propose a method to fabricate nanotube arrays, and SnO₂ nanotube-array sensors with different wall thicknesses were successfully prepared and examined. The fabrication of the SnO₂ nanotube arrays combines a barrier-free anodic aluminum oxide (AAO) template with atomic layer deposition (ALD) of SnO₂. First, a 1.0 µm Al film was deposited on an ITO glass substrate by electron beam evaporation and then anodically oxidized in 5 wt% phosphoric acid solution at 5 °C under a constant voltage of 100 V to form porous aluminum oxide. Once the Al film was fully oxidized, a 15 min over-anodization and a 30 min post chemical dissolution were used to remove the barrier oxide at the bottom end of the pores, generating a barrier-free AAO template. ALD using SnCl₄ and H₂O as reactants followed, growing a thin layer of SnO₂ on the template to form SnO₂ nanotube arrays. After removing the surface layer of SnO₂ by H₂ plasma and dissolving the template in 5 wt% phosphoric acid solution at 50 °C, upright-standing SnO₂ nanotube arrays on ITO glass were produced. Finally, an Ag top electrode with a line width of 5 μm was printed on the nanotube arrays to form the SnO₂ nanotube-array sensor. Two SnO₂ nanotube arrays with wall thicknesses of 30 and 60 nm were produced in this experiment for the evaluation of gas sensing ability.
The flat SnO₂ films with thicknesses of 30 and 60 nm were also examined for comparison. The results show that the properties of the ALD SnO₂ films were related to the deposition temperature. The films grown at 350 °C had a low electrical resistivity of 3.6×10⁻³ Ω·cm and were therefore used for the nanotube-array sensors. The carrier concentration and mobility of the SnO₂ films, characterized with an Ecopia HMS-3000 Hall-effect measurement system, were 1.1×10²⁰ cm⁻³ and 16 cm²/V·s, respectively. The electrical resistance of the SnO₂ film and nanotube-array sensors in air and in a 5% H₂-95% N₂ mixture gas was monitored with a Picotest M3510A 6 1/2-digit multimeter. It was found that, at 200 °C, the 30-nm-wall SnO₂ nanotube-array sensor exhibits the highest responsivity to 5% H₂, followed by the 30-nm SnO₂ film sensor, the 60-nm SnO₂ film sensor, and the 60-nm-wall SnO₂ nanotube-array sensor. However, at temperatures below 100 °C, all the samples were insensitive to the 5% H₂ gas. Further investigation of sensors with thinner SnO₂ is necessary to improve the sensing ability at temperatures below 100 °C.
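The responsivity compared above is derived from the monitored resistances. A common convention for an n-type oxide exposed to a reducing gas such as H₂ (the abstract does not state its exact definition, so this is an assumption) is the ratio of the resistance in air to the resistance in the test gas:

```python
def sensor_response(r_air_ohm, r_gas_ohm):
    """Response of an n-type MOS gas sensor to a reducing gas,
    defined here as R_air / R_gas; values > 1 indicate the resistance
    dropped on gas exposure, as expected for H2 on SnO2."""
    return r_air_ohm / r_gas_ohm

# Illustrative resistances, not measured values from the study
response = sensor_response(r_air_ohm=1.2e6, r_gas_ohm=3.0e5)
```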

Keywords: atomic layer deposition, nanotube arrays, gas sensor, tin dioxide

Procedia PDF Downloads 220
242 Geometric Optimisation of Piezoelectric Fan Arrays for Low Energy Cooling

Authors: Alastair Hales, Xi Jiang

Abstract:

Numerical methods are used to evaluate the operation of confined face-to-face piezoelectric fan arrays as pitch, P, between the blades is varied. Both in-phase and counter-phase oscillation are considered. A piezoelectric fan consists of a fan blade, which is clamped at one end, and an extremely low powered actuator. This drives the blade tip’s oscillation at its first natural frequency. Sufficient blade tip speed, created by the high oscillation frequency and amplitude, is required to induce vortices and downstream volume flow in the surrounding air. A single piezoelectric fan may provide the ideal solution for low powered hot spot cooling in an electronic device, but is unable to induce sufficient downstream airflow to replace a conventional air mover, such as a convection fan, in power electronics. Piezoelectric fan arrays, which are assemblies including multiple fan blades usually in face-to-face orientation, must be developed to widen the field of feasible applications for the technology. The potential energy saving is significant, with a 50% power demand reduction compared to convection fans even in an unoptimised state. A numerical model of a typical piezoelectric fan blade is derived and validated against experimental data. Numerical error is found to be 5.4% and 9.8% using two data comparison methods. The model is used to explore the variation of pitch as a function of amplitude, A, for a confined two-blade piezoelectric fan array in face-to-face orientation, with the blades oscillating both in-phase and counter-phase. It has been reported that in-phase oscillation is optimal for generating maximum downstream velocity and flow rate in unconfined conditions, due at least in part to the beneficial coupling between the adjacent blades that leads to an increased oscillation amplitude. The present model demonstrates that confinement has a significant detrimental effect on in-phase oscillation. 
Even at low pitch, counter-phase oscillation produces enhanced downstream air velocities and flow rates. Downstream air velocity from counter-phase oscillation can be enhanced, relative to that generated by a single blade, by at most 17.7% at P = 8A. The flow rate enhancement at the same pitch is found to be 18.6%. By comparison, in-phase oscillation at the same pitch yields 23.9% and 24.8% reductions in peak downstream air velocity and flow rate, relative to those generated by a single blade. This optimal pitch, equivalent to those reported in the literature, suggests that counter-phase oscillation is less affected by confinement. The optimal pitch for generating bulk airflow from counter-phase oscillation is large, P > 16A, due to the small but significant downstream velocity across the span between adjacent blades. However, when designing for a confined space, counter-phase pitch should be minimised to maximise the bulk airflow generated from a given cross-sectional area in a channel-flow application. Quantitative values deviate to a small degree as other geometric and operational parameters are varied, but the established relationships are maintained.

Keywords: piezoelectric fans, low energy cooling, power electronics, computational fluid dynamics

Procedia PDF Downloads 196
241 Analysis of Differentially Expressed Genes in Spontaneously Occurring Canine Melanoma

Authors: Simona Perga, Chiara Beltramo, Floriana Fruscione, Isabella Martini, Federica Cavallo, Federica Riccardo, Paolo Buracco, Selina Iussich, Elisabetta Razzuoli, Katia Varello, Lorella Maniscalco, Elena Bozzetta, Angelo Ferrari, Paola Modesto

Abstract:

Introduction: Human and canine melanoma have common clinical and histologic characteristics, making the dog a good model for comparative oncology. The identification of specific genes and a better understanding of the genetic landscape, signaling pathways, and tumor-microenvironment interactions involved in cancer onset and progression are essential for the development of therapeutic strategies against this tumor in both species. In the present study, the differential expression of genes in spontaneously occurring canine melanoma and in paired normal tissue was investigated by targeted RNAseq. Material and Methods: Total RNA was extracted from 17 canine malignant melanoma (CMM) samples and from five paired normal tissues stored in RNAlater. In order to capture the greatest genetic variability, gene expression analysis was carried out using two panels (Qiagen), Human Immuno-Oncology (HIO) and Mouse Immuno-Oncology (MIO), and the MiSeq platform (Illumina). These kits allow the detection of the expression profile of 990 genes involved in the immune response against tumors in humans and mice. The data were analyzed with the CLC Genomics Workbench (Qiagen) software using the Canis lupus familiaris genome as a reference. Data analysis was carried out both by comparing the biological groups (tumoral vs. healthy tissues) and by comparing each neoplastic tissue with its paired healthy tissue; a fold change greater than two and a p-value less than 0.05 were set as the thresholds to select genes of interest. Results and Discussion: Using HIO, 63 down-regulated genes were detected; 13 of these were also down-regulated when comparing neoplastic samples with their paired healthy tissues. Eighteen genes were up-regulated, and 14 of these were also up-regulated in the paired comparison. Using MIO, 35 down-regulated genes were detected; only four of these were also down-regulated in the paired comparison.
Twelve genes were up-regulated in both types of analysis. Considering the two kits, the greatest variation in fold change was among the up-regulated genes. Dogs displayed greater genetic homology with humans than with mice; moreover, the results showed that the two kits detect different genes. Most of these genes have specific cellular functions or belong to certain enzymatic categories; some have already been described as correlated with human melanoma, confirming the validity of the dog as a model for studying the molecular aspects of human melanoma.
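The gene-selection rule described above (fold change > 2, p < 0.05, in either direction) can be sketched as a simple filter; the gene names and values below are invented for illustration:

```python
def select_degs(results, fc_threshold=2.0, p_threshold=0.05):
    """Split genes into up- and down-regulated lists.

    `results` maps gene name -> (fold_change, p_value). Fold changes
    below 1 are treated as down-regulation (FC < 1 / threshold), a
    common convention when FC is expressed as a ratio.
    """
    up, down = [], []
    for gene, (fc, p) in results.items():
        if p >= p_threshold:
            continue  # not statistically significant
        if fc > fc_threshold:
            up.append(gene)
        elif fc < 1.0 / fc_threshold:
            down.append(gene)
    return up, down

# Illustrative tumour-vs-healthy results, not data from the study
example = {
    "GENE_A": (3.1, 0.01),   # up-regulated, significant
    "GENE_B": (0.3, 0.02),   # down-regulated, significant
    "GENE_C": (2.5, 0.20),   # large FC but not significant
    "GENE_D": (1.2, 0.001),  # significant but below the FC threshold
}
up, down = select_degs(example)
```

Genes passing the filter in both the group-level and the paired comparison would then form the overlap sets reported in the abstract.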

Keywords: animal model, canine melanoma, gene expression, spontaneous tumors, targeted RNAseq

Procedia PDF Downloads 173
240 Saco Sweet Cherry: Phenolic Profile and Biological Activity of Coloured and Non-Coloured Fractions

Authors: Catarina Bento, Ana Carolina Gonçalves, Fábio Jesus, Luís Rodrigues Silva

Abstract:

Increasing evidence suggests that a diet rich in fruits and vegetables plays an important role in the prevention of chronic diseases, such as heart disease, cancer, stroke, diabetes, and Alzheimer's disease, among others. Fruits and vegetables have gained prominence due to their richness in bioactive compounds and have been the focus of many studies on the biological properties through which they act as health promoters. Prunus avium Linnaeus (L.), commonly known as sweet cherry, has attracted attention due to its health benefits and has been extensively studied. In Portugal, most of the cherry production comes from the Fundão region. Saco, one of the most important cultivars produced in this region, has been granted geographical protection. In this work, we prepared three extracts through solid-phase extraction (SPE): a whole extract, fraction I (non-coloured phenolics), and fraction II (coloured phenolics). The three extracts were used to determine the phenolic profile of the Saco cultivar by liquid chromatography with diode array detection (LC-DAD). This was followed by the evaluation of their biological potential, testing the extracts' capacity to scavenge free radicals (DPPH•, nitric oxide (•NO) and superoxide radical (O₂•⁻)) and to inhibit the α-glucosidase enzyme. Additionally, we evaluated, for the first time, the protective effects against peroxyl radical (ROO•)-induced hemoglobin oxidation and hemolysis in human erythrocytes. A total of 16 non-coloured phenolics were detected, 3-O-caffeoylquinic and p-coumaroylquinic acids being the main ones, and 6 anthocyanins were found, among which cyanidin-3-O-rutinoside was the major one. With respect to antioxidant activity, Saco showed great potential in a concentration-dependent manner, demonstrated against the DPPH•, •NO and O₂•⁻ radicals, and a greater ability to inhibit the α-glucosidase enzyme than acarbose, a drug commonly used to treat diabetes.
Additionally, Saco proved effective in protecting erythrocytes against oxidative damage, acting in a concentration-dependent manner against both hemoglobin oxidation and hemolysis. Our work demonstrated that the Saco cultivar is an excellent source of phenolic compounds, natural antioxidants that readily capture reactive species such as ROO• before they can attack the erythrocyte membrane. In general, the whole extract showed the best efficiency, most likely due to a synergistic interaction between the different compounds. Finally, comparing the two separate fractions, the coloured fraction was the most active in all the assays, proving to be the biggest contributor to the biological activity of Saco cherries.
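Concentration-dependent radical-scavenging assays such as the DPPH• test described above are often summarized by an IC50 value (the concentration giving 50 % inhibition). A minimal sketch of estimating IC50 by linear interpolation, using hypothetical assay readings (the abstract reports no raw numbers):

```python
import numpy as np

def ic50_interpolated(concentrations, inhibition_pct):
    """Estimate IC50 by linear interpolation between the two
    measured points that bracket 50 % inhibition."""
    c = np.asarray(concentrations, dtype=float)
    y = np.asarray(inhibition_pct, dtype=float)
    order = np.argsort(c)              # ensure ascending concentration
    c, y = c[order], y[order]
    if y.max() < 50 or y.min() > 50:
        raise ValueError("50 % inhibition is not bracketed by the data")
    # np.interp needs increasing x-values; inhibition rises with dose here
    return float(np.interp(50.0, y, c))

# Hypothetical DPPH readings: concentration (µg/mL) vs. % radical scavenged
conc = [10, 25, 50, 100, 200]
inhib = [12, 28, 46, 71, 90]
print(ic50_interpolated(conc, inhib))   # -> 58.0
```

Interpolation is only a first approximation; a four-parameter logistic fit is the usual choice when enough data points are available.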

Keywords: biological potential, coloured phenolics, non-coloured phenolics, sweet cherry

Procedia PDF Downloads 225
239 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. 
The results showed promising accuracy for predicting diseases from symptoms, with ensemble learning techniques significantly improving prediction accuracy. The model developed in this study therefore has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. It is important to note, however, that the model's accuracy can be affected by several factors, including the quality of the X-ray images, the size of the training dataset, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images, which can improve the accuracy of disease diagnosis and ultimately lead to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
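The three symptom classifiers named above can be combined into a voting ensemble, which the study credits with improving accuracy. A minimal scikit-learn sketch on a synthetic stand-in dataset (the study's patient data is not available, so the features below are only placeholders for symptom vectors):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a (patients x symptoms) feature matrix
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Soft-voting ensemble over the three classifiers named in the study
ensemble = VotingClassifier(estimators=[
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
], voting="soft")
ensemble.fit(X_tr, y_tr)
print(f"held-out accuracy: {ensemble.score(X_te, y_te):.2f}")
```

Soft voting averages the predicted class probabilities, which is why the SVM is fitted with `probability=True`; with `voting="hard"` the ensemble would instead take a simple majority of predicted labels.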

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 112
238 Aquatic Sediment and Honey of Apis mellifera as Bioindicators of Pesticide Residues

Authors: Luana Guerra, Silvio C. Sampaio, Vladimir Pavan Margarido, Ralpho R. Reis

Abstract:

Brazil is the world's largest consumer of pesticides. The excessive use of these compounds has negative impacts on animal and human life, the environment, and food security. Bees, crucial for pollination, are exposed to pesticides while collecting nectar and pollen, posing risks to their health and to the food chain, including honey contamination. Aquatic sediments are also affected, impacting water quality and the microbiota. The analysis of aquatic sediments and bee honey is therefore essential to identify environmental contamination and monitor ecosystems. The aim of this study was to use honey from honeybees (Apis mellifera) and aquatic sediment as bioindicators of environmental contamination by pesticides and to relate this contamination to agricultural use in the surrounding areas. Sediment and honey were collected in two stages: the first in the Bituruna municipality region in the second half of 2022, and the second in the regions of Laranjeiras do Sul, Quedas do Iguaçu, and Nova Laranjeiras in the first half of 2023. In total, 10 collection points were selected (5 per stage), with one sediment sample and one honey sample collected at each point, totaling 20 samples. The honey and sediment samples were analyzed at the Laboratory of the Paraná Institute of Technology. QuEChERS was selected as the extraction method, and the components present in the samples were analyzed by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS).
The pesticides Azoxystrobin, Epoxiconazole, Boscalid, Carbendazim, Haloxyfop, Fomesafen, Fipronil, Chlorantraniliprole, Imidacloprid, and Bifenthrin were detected in the sediment samples from the study area in Laranjeiras do Sul, Paraná, with Carbendazim showing the highest concentration (0.47 mg/kg). The honey samples obtained from the apiaries showed satisfactory results, with no detection or quantification of the analyzed pesticides, except at Point 9, where the fungicide tebuconazole was found at a concentration below the quantification limit.

Keywords: contamination, water research, agrochemicals, beekeeping activity

Procedia PDF Downloads 19
237 Study on Changes of Land Use impacting the Process of Urbanization, by Using Landsat Data in African Regions: A Case Study in Kigali, Rwanda

Authors: Delphine Mukaneza, Lin Qiao, Wang Pengxin, Li Yan, Chen Yingyi

Abstract:

Human activities cause land use and land cover to change gradually or to transition abruptly. In this study, we used Landsat TM data to detect land use change in Kigali between 1987 and 2009 using remote sensing techniques, with analysis performed in ENVI and the GIS software ArcGIS. Six categories of land use were distinguished: bare soil, built-up land, wetland, water, vegetation, and others. By analyzing land use data for 1987, 1999, and 2009, we identified the changed areas and characterized the dynamics of land use in Kigali city over the 22 years studied. Based on the relevant Landsat data, the research focused on land use change and on the role of remote sensing in studying the process of urbanization. The results show a rapid increase in built-up land between 1987 and 1999 and a large decrease in vegetation caused by the rebuilding of the city after the 1994 genocide, while in the period 1999 to 2009 both built-up land and vegetation declined after the Kigali city authority established a Master Plan, under which constructions falling outside the plan were demolished. Through the expansion of its urban area, Rwanda's capital, Kigali City, is increasing internal employment and attracting business investors and the service sector to improve its economy, which will support population growth and a better quality of life. The overall planning of the city of Kigali considers the environment, land use, infrastructure, cultural and socio-economic factors, economic development and population forecasts, urban development, and specified constraints. To achieve the above purpose, the Government has set out, within the overall plan for Kigali, staged detailed designs, strategies, and action plans to guide Kigali planners and the public toward more detailed regional plans and practical measures in the future.
Thus, land use change is a significant indicator of human activity in Kigali and plays an important role in informing national decisions. Another aspect to take into account is the natural situation of Kigali city: agriculture does not occupy a dominant position in the region, and with population growth and socio-economic development, the built-up area will gradually expand and speed up the process of urbanization. As a developing country, Rwanda's population continues to grow while land utilization rates remain low and urbanization remains limited. As mentioned earlier, the 1994 genocide massacres, population growth, and urbanization processes have been the factors driving the dramatic changes in land use. Further research should focus on analyzing Rwanda's natural resources and the social and economic factors that could be the driving forces of land use change.
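Post-classification change detection of the kind described above is commonly summarized by a class-transition matrix between the two dates. A minimal sketch with toy maps and assumed class codes (not the study's data; the codes 0-5 simply mirror the six categories listed in the abstract):

```python
import numpy as np

# Assumed class codes: 0=bare soil, 1=built-up, 2=wetland,
# 3=water, 4=vegetation, 5=others
def transition_matrix(before, after, n_classes=6):
    """Count pixels moving from class i (rows) to class j (columns)
    between two co-registered classified rasters."""
    before = np.asarray(before).ravel()
    after = np.asarray(after).ravel()
    m = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(m, (before, after), 1)   # m[i, j] += 1 for each pixel i -> j
    return m

before = np.array([[4, 4, 1], [4, 0, 1]])   # toy "1987" map
after  = np.array([[1, 4, 1], [1, 0, 1]])   # toy "1999" map
m = transition_matrix(before, after)
print(m[4, 1])   # pixels of vegetation converted to built-up land -> 2
```

The diagonal of the matrix holds unchanged pixels, so off-diagonal sums give the total changed area once multiplied by the pixel size (30 m x 30 m for Landsat TM).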

Keywords: land use change, urbanization, Kigali City, Landsat

Procedia PDF Downloads 287
236 Electrochemical Performance of Femtosecond Laser Structured Commercial Solid Oxide Fuel Cells Electrolyte

Authors: Mohamed A. Baba, Gazy Rodowan, Brigita Abakevičienė, Sigitas Tamulevičius, Bartlomiej Lemieszek, Sebastian Molin, Tomas Tamulevičius

Abstract:

Solid oxide fuel cells (SOFC) efficiently convert hydrogen to energy without producing noise or pollutants. The core of the cell is the electrolyte. To improve the performance of electrolyte-supported cells, it is desirable to extend the available exchange surface area by micro-structuring the electrolyte with laser-based micromachining. This study investigated the electrochemical performance of cells micromachined using a femtosecond laser. A commercial ceramic SOFC (Elcogen AS) with a total thickness of 400 μm was structured with a 1030 nm wavelength Yb:KGW fs-laser Pharos (Light Conversion) delivering 290 fs pulses at a 100 kHz repetition rate, scanned with a galvanometer scanner (ScanLab) and focused with an f-theta telecentric lens (Sill Optics). The sample height was positioned using a motorized z-stage. The microstructures were formed by laser spiral trepanning in the Ni/YSZ anode-supported membrane over a 5.5 mm diameter at the central part of the ceramic piece, at the active area of the cell. The entire surface was drilled with 275 µm diameter holes spaced 275 µm apart. The machining was carried out under ambient conditions. The microstructural effects of the femtosecond laser treatment on the electrolyte surface were investigated prior to the electrochemical characterization using a Quanta 200 FEG (FEI) scanning electron microscope (SEM). A Novocontrol Alpha-A analyzer was used for electrochemical impedance spectroscopy in a symmetrical cell configuration with an excitation amplitude of 25 mV over a frequency range of 1 MHz to 0.1 Hz. The fuel cell characterization was performed on an open-flanges test setup by Fiaxell. The cell was electrically connected using nickel mesh on the anode side and Au mesh on the cathode side, and placed in a Kittec furnace with a Process IDentifier temperature controller. The wires were connected to a Solartron 1260/1287 frequency analyzer for impedance and current-voltage characterization.
In order to determine the impact of the anode's microstructure on the performance of the commercial cells, the results were compared to cells with an unstructured anode. Geometrical studies verified that the depth of the holes increased linearly with laser energy and the number of scans, and decreased as the scanning speed increased. The electrochemical analysis demonstrates that the open-circuit voltage (OCV) values of the two cells are equal. Further, the modified cell's initial slope decreases to 0.209 from the 0.253 of the unmodified cell, revealing that the surface modification considerably reduces energy loss. Moreover, the maximum power densities for the microstructured cell and the reference cell are 1.45 and 1.16 W cm⁻², respectively.
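Quantities such as the OCV and maximum power density reported above are read directly from the cell's current-voltage curve: power density is the pointwise product of current density and voltage. A minimal sketch with illustrative i-V points (not the measured data from this study):

```python
def peak_power_density(j, v):
    """j: current density points (A/cm^2), v: voltage points (V).
    Returns (OCV, maximum power density in W/cm^2)."""
    ocv = v[0]                               # voltage at zero current
    p = [ji * vi for ji, vi in zip(j, v)]    # P = j * V at each point
    return ocv, max(p)

# Illustrative i-V sweep of a single cell
j = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]           # A/cm^2
v = [1.05, 0.95, 0.85, 0.75, 0.62, 0.45]     # V
ocv, p_max = peak_power_density(j, v)
print(ocv, round(p_max, 3))   # -> 1.05 1.24
```

On a real sweep the peak usually falls between sampled points, so interpolating the i-V curve before taking the maximum gives a slightly better estimate.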

Keywords: electrochemical performance, electrolyte-supported cells, laser micro-structuring, solid oxide fuel cells

Procedia PDF Downloads 44
235 Food Design as a University-Industry Collaboration Project: An Experience Design on Controlling Chocolate Consumption and Long-Term Eating Behavior

Authors: Büşra Durmaz, Füsun Curaoğlu

Abstract:

While technology-driven developments in the modern world change our perceptions of time and speed, they also strain our food consumption patterns, eroding habits such as taking pleasure in what we eat and eating slowly. Eating quickly and hastily not only prevents us from appreciating the taste of the food but also makes it hard to recognize satiety, leading to many health problems. In this context, particularly over the last ten years, food manufacturers concerned with healthy living and consumption have been collaborating with industrial designers on food design. The consumers of the new century, under uncontrolled time pressure, turn to small snacks as a source of happiness and pleasure in the short intervals they can spare. Chocolate in particular has been a source of both happiness and pleasure for its consumers for hundreds of years. However, when portions cannot be controlled, a pleasure food such as chocolate can cause both health problems and emotional distress, especially feelings of guilt. Meanwhile, fast food (food that is prepared and consumed quickly) has been spreading rapidly around the world in recent years. This study covers the process and results of a user-experience-based chocolate design from a university-industry cooperation project carried out within the scope of Eskişehir Technical University graduation projects. The aim of the project is a creative product design that enables the user to experience chocolate consumption within a healthy eating approach.
To this end, concepts such as pleasure, satiety, and taste were examined, and a case study based on the qualitative research paradigm was structured within a user-oriented design approach. It combined a literature review covering topics such as mouth anatomy, tongue structure, taste, the brain's role in eating, hormones, and chocolate; a survey of 151 people; semi-structured face-to-face interviews with 7 people during the experience design process; video analysis; and project diaries. The research found that slow melting in the mouth is the experience users prefer for prolonging the pleasure of eating chocolate while keeping portions healthy. In this context, the study covers the production of sketches, mock-ups, and prototypes of the product. As a result, a packaging design was produced that engages the senses where consumption begins (sight, smell, and hearing), so that the chocolate is consumed by melting and the salivary glands, the most important stimulus, are actively triggered to secrete, providing healthy, long-lasting, pleasure-based consumption.

Keywords: chocolate, eating habit, pleasure, saturation, sense of taste

Procedia PDF Downloads 55
234 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Under current regulatory and compendial requirements, measuring residual solvents is mandatory in all release testing of active pharmaceutical ingredients (API). Typically, isopropyl alcohol is the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine, and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. To provide a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol, and acetic acid) in all seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation, and bad peak shape were observed for the acetic acid peaks, caused by the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane) of the column and the dissociation of acetic acid in water (when used as diluent) under the temperature gradient. Therefore, dimethyl sulfoxide was used as diluent to avoid these issues; by contrast, most published methods for acetic acid quantification by GC-HS rely on a derivatization technique to protect the acetic acid. As per the compendia, a risk-based approach was selected to determine the degree and extent of the validation needed to assure the fitness of the procedure, and the total error concept was chosen to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower (quantitation limit) level and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed on a DB-WAXetr column (Agilent) with a 530 µm internal diameter, 2.0 µm film thickness, and 30 m length. Helium was used as the carrier gas at a constant flow of 6.0 mL/min in constant makeup mode.
The present method is simple, rapid, and accurate, and is suitable for the rapid analysis of isopropyl alcohol, ethanol, methanol, and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol, and 100 ppm to 400 ppm for acetic acid, which covers the specification limits given in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of validation were found to be satisfactory. Therefore, this method can be used for testing residual solvents in amino acid drug substances.

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 122
233 The Solid-Phase Sensor Systems for Fluorescent and SERS-Recognition of Neurotransmitters for Their Visualization and Determination in Biomaterials

Authors: Irina Veselova, Maria Makedonskaya, Olga Eremina, Alexandr Sidorov, Eugene Goodilin, Tatyana Shekhovtsova

Abstract:

Catecholamines such as dopamine, norepinephrine, and epinephrine are the principal neurotransmitters of the sympathetic nervous system. Catecholamines and their metabolites are considered important markers of socially significant diseases such as atherosclerosis, diabetes, coronary heart disease, carcinogenesis, and Alzheimer's and Parkinson's diseases. Currently, neurotransmitters can be studied via electrochemical and chromatographic techniques that allow their characterization and quantification, although these techniques provide only crude spatial information. Moreover, catecholamine determination in biological materials is difficult because of their low normal concentrations (~1 nM), which may fall another order of magnitude lower in some disorders. In addition, in blood they are rapidly oxidized by monoamine oxidases from thrombocytes; for this reason, indicators of neurotransmitter metabolism in an organism must be determined very rapidly (15-30 min), especially in critical states. Unfortunately, modern instrumental analysis does not offer a complete solution to this problem: despite its high sensitivity and selectivity, HPLC-MS cannot provide sufficiently rapid analysis, while enzymatic biosensors and immunoassays for the considered analytes lack sufficient sensitivity and reproducibility. Fluorescent and SERS sensors remain a compelling technology for approaching the general problem of selective neurotransmitter detection. In recent years, a number of catecholamine sensors have been reported, including RNA aptamers, fluorescent ribonucleopeptide (RNP) complexes, and boronic-acid-based synthetic receptors, most of which operate in a turn-off mode.
In this work we present fluorescent and SERS turn-on sensor systems based on bio- or chemorecognizing nanostructured films {chitosan/collagen-Tb/Eu/Cu-nanoparticles-indicator reagents} that provide selective recognition, visualization, and sensing of the above-mentioned catecholamines at nanomolar concentrations in biomaterials (cell cultures, tissues, etc.). We have (1) developed optically transparent porous films and gels of chitosan/collagen; (2) functionalized the surface with recognition molecules (by impregnation and immobilization of the components of the indicator systems: biorecognizing and auxiliary reagents); and (3) performed computer simulations for theoretical prediction and interpretation of some properties of the developed materials and of the analytical signals obtained in biomaterials. We are grateful for the financial support of this research from the Russian Foundation for Basic Research (grants no. 15-03-05064 a and 15-29-01330 ofi_m).

Keywords: biomaterials, fluorescent and SERS-recognition, neurotransmitters, solid-phase turn-on sensor system

Procedia PDF Downloads 378
232 Soybean Oil Based Phase Change Material for Thermal Energy Storage

Authors: Emre Basturk, Memet Vezir Kahraman

Abstract:

In many developing countries, with rapid economic growth, energy shortages and environmental issues have become a serious problem. It has therefore become critical to improve the efficiency of energy usage while also protecting the environment. Thermal energy storage is an essential approach to matching thermal energy demand and supply. Thermal energy can be stored by heating, cooling, or melting a material and then recovering the energy when the process is reversed. Thermal energy storage techniques are generally classified into latent heat and sensible heat storage. Among these, latent heat storage is the most effective method of collecting thermal energy. Latent heat thermal energy storage depends on the storage material absorbing or releasing heat as it undergoes a solid-to-liquid, solid-to-solid, or liquid-to-gas phase change, or vice versa. Phase change materials (PCMs) are promising materials for latent heat storage applications due to their capacity to store high latent heat per unit volume through a phase change at an almost constant temperature. PCMs absorb, store, and release thermal energy during cycles of melting and freezing, converting from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates, and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. Organic PCMs are rather expensive, have average latent heat storage per unit volume, and have low density. Most organic PCMs are combustible in nature and cover a wide range of melting points. Organic PCMs can be categorized into two major groups: non-paraffinic and paraffin materials.
Paraffin materials have been used extensively due to their high latent heat and favorable thermal characteristics, such as minimal supercooling, a suitable phase change temperature, low vapor pressure while melting, good chemical and thermal stability, and self-nucleating behavior. Ultraviolet (UV) curing technology is widely used because it offers many advantages, such as low energy consumption, high speed, high chemical stability, room-temperature operation, low processing costs, and environmental friendliness. For many years, PCMs have been used in heating and cooling applications including textiles, refrigerators, construction, transport packaging for temperature-sensitive products, solar-energy-based systems, and biomedical and electronic materials. In this study, UV-curable, fatty alcohol-containing soybean oil based phase change materials (PCMs) were obtained and characterized. The phase transition behavior and thermal stability of the prepared UV-cured biobased PCMs were analyzed by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). The phase change enthalpy of the heating process was measured between 30 and 68 J/g, and that of the freezing process between 18 and 70 J/g. The decomposition of the UV-cured PCMs started at 260 °C and reached a maximum at 430 °C.

Keywords: fatty alcohol, phase change material, thermal energy storage, UV curing

Procedia PDF Downloads 347
231 Comparative Vector Susceptibility for Dengue Virus and Their Co-Infection in A. aegypti and A. albopictus

Authors: Monika Soni, Chandra Bhattacharya, Siraj Ahmed Ahmed, Prafulla Dutta

Abstract:

Dengue is now a globally important arboviral disease. Extensive vector surveillance has long established A. aegypti as the primary vector, but A. albopictus is now accelerating the situation through gradual adaptation to human surroundings. Global destabilization and gradual climatic shifts with rising temperatures have significantly expanded the geographic range of these species. These versatile vectors also host Chikungunya, Zika, and yellow fever viruses. The biggest challenge faced by endemic countries now is the upsurge in reported co-infections with multiple serotypes and virus co-circulation. To support vector control interventions and mitigate the disease burden, knowledge is needed on vector susceptibility and viral tolerance in response to multiple infections. To address our understanding of transmission dynamics and reproductive fitness, both vectors were exposed to single and dual combinations of all four dengue serotypes by artificial feeding and followed up to the third generation. A significant difference in feeding rate was observed between the two species: A. albopictus was a poor artificial feeder (35-50%) compared to A. aegypti (95-97%). Sequential screening of viral antigen in mosquitoes was performed by dengue NS1 ELISA, RT-PCR, and quantitative PCR, and viral dissemination in different mosquito tissues was observed by indirect immunofluorescence assay. The results showed that both vectors were initially infected with all dengue serotypes (1-4) and their co-infection combinations (D1 and D2, D1 and D3, D1 and D4, D2 and D4). For DENV-2, a significant difference in peak titer was observed at day 16 post-infection. When exposed to dual infections, A. aegypti supported all virus combinations, whereas A. albopictus sustained only single infections over successive days. There was a significant negative effect on the fecundity and fertility of both vectors compared to controls (P < 0.001, ANOVA).
In dengue-2 infected mosquitoes, fecundity in the parent generation was significantly higher (P < 0.001, Bonferroni-corrected) for A. albopictus compared to A. aegypti, but A. albopictus showed a complete loss of fecundity from the second to the third generation. A. aegypti became infected with multiple serotypes more frequently, even at low viral titres, compared to A. albopictus. Possible reasons include Wolbachia infection in A. albopictus, the mosquito innate immune response, and small RNA interference. Based on these observations, transovarial transmission may not be an important phenomenon for clinical disease outcome, given the absence of viral positivity by the third generation. Also, dengue NS1 ELISA can be used for preliminary viral detection in mosquitoes, as more than 90% of the samples were found positive compared to RT-PCR and viral load estimation.

Keywords: co-infection, dengue, reproductive fitness, viral quantification

Procedia PDF Downloads 178
230 Working Memory and Audio-Motor Synchronization in Children with Different Degrees of Central Nervous System's Lesions

Authors: Anastasia V. Kovaleva, Alena A. Ryabova, Vladimir N. Kasatkin

Abstract:

Background: The simplest form of entrainment to a sensory (typically auditory) rhythmic stimulus involves perceiving and synchronizing movements with an isochronous beat with one level of periodicity, such as that produced by a metronome. Children with pediatric cancer are usually treated with chemo- and radiotherapy, and as a result of such treatment, psychologists and health professionals report declines in cognitive and motor abilities in these patients. The purpose of our study was to measure working memory characteristics in association with audio-motor synchronization tasks, which also draw on memory resources, in children with different degrees of central nervous system lesions: posterior fossa tumors, acute lymphoblastic leukemia, and healthy controls. Methods: Our sample consisted of three groups of children: children treated for posterior fossa tumors (PFT group, n=42, mean age 12.23), children treated for acute lymphoblastic leukemia (ALL group, n=11, mean age 11.57), and neurologically healthy children (control group, n=36, mean age 11.67). Participants were tested for working memory characteristics with the Cambridge Neuropsychological Test Automated Battery (CANTAB), using the pattern recognition memory (PRM) and spatial working memory (SWM) tests. Outcome measures of the PRM test include the number and percentage of correct trials and latency (speed of the participant's response); measures of the SWM test include errors, strategy, and latency. In the synchronization tests, the instruction was to tap out a regular beat (40, 60, 90, and 120 beats per minute) in synchrony with the rhythmic sequences that were played; for sequences with an isochronous beat, this meant participants were required to tap on every auditory event. The variability of inter-tap intervals and the deviations of children's taps from the metronome were assessed.
Results: Analysis of variance revealed a significant effect of group (ALL, PFT, control) on short-term PRM and on SWM strategy and errors. Healthy controls retained more elements correctly and used a better working memory strategy than the cancer patients. Interestingly, ALL patients chose a poorer strategy but committed significantly fewer errors in the SWM test than the PFT and control groups. As to rhythmic ability, significant associations with working memory were found only at the 40 bpm rhythm: the less variable a child's inter-tap intervals, the more elements he or she could retain in memory. The ability to synchronize with sound may be related to working memory processes mediated by the prefrontal cortex, whereby each sensory event is actively retrieved and monitored during rhythmic sequencing. Conclusion: Our results suggest that working memory, tested with appropriate cognitive methods, is associated with the ability to synchronize movements with rhythmic sounds, especially at slow tempi (40 beats per minute, i.e., 1.5 s between beats).
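The inter-tap-interval variability measure described above can be expressed as a coefficient of variation (CV): the standard deviation of the intervals divided by their mean, so that steadier tapping yields a lower CV. A minimal sketch with hypothetical tap timestamps (the study's raw tapping data is not given):

```python
import statistics

def iti_cv(tap_times):
    """Coefficient of variation of inter-tap intervals.
    tap_times: tap timestamps in seconds, in order."""
    itis = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return statistics.pstdev(itis) / statistics.mean(itis)

# Hypothetical taps near a 40 bpm beat (1.5 s target interval)
taps = [0.0, 1.48, 3.02, 4.49, 6.01, 7.50]
print(round(iti_cv(taps), 3))   # low value = steady synchronization
```

Deviation from the metronome itself (asynchrony) would be computed separately, as the signed difference between each tap and its nearest beat onset.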

Keywords: acute lymphoblastic leukemia (ALL), audio-motor synchronization, posterior fossa tumor, working memory

Procedia PDF Downloads 282