Search results for: emergency frequency regulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6213

1173 Characterization and Monitoring of the Yarn Faults Using Diametric Fault System

Authors: S. M. Ishtiaque, V. K. Yadav, S. D. Joshi, J. K. Chatterjee

Abstract:

The DIAMETRIC FAULTS system has been developed to capture bi-directional images of yarn continuously and sequentially and to provide a detailed classification of faults. A novel mathematical framework built on the acquired bi-directional images forms the basis of fault classification into four broad categories, namely Thick1, Thick2, Thin and Normal Yarn. A discretised version of the Radon transform is used to convert the bi-directional images into one-dimensional signals. The images were divided into training and test sets. A Karhunen–Loève Transform (KLT) basis is computed from the training-set signals of each fault class, retaining the six highest-energy eigenvectors. The fault class of a test image is identified from the Euclidean distance between its signal and the signal's projection onto the KLT basis for each sample realization and fault class in the training set. An accuracy of about 90% in detecting the correct fault class is achieved across the various distance-based techniques. The four broad fault classes were further sub-classified into four subgroups based on user-set boundary limits for fault length and fault volume; the fault cross-sectional area and the fault length together define the total fault volume. A distinct distribution of faults in terms of volume and physical dimensions was found, which can be used for monitoring yarn faults. The configuration-based characterization and classification show that spun-yarn faults arising from mass variation exhibit distinct contours, sizes and shapes, in addition to differing frequencies of occurrence.
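The KLT-based classification step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Radon-transform front end is omitted, and the function names and synthetic signals are invented for the example.

```python
import numpy as np

def klt_basis(signals, k=6):
    """Top-k highest-energy eigenvectors (KLT/PCA basis) of one fault
    class's training signals (rows = 1-D signals from the transformed images)."""
    X = signals - signals.mean(axis=0)
    cov = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argsort(vals)[::-1][:k]]  # columns are basis vectors

def classify(signal, bases, means):
    """Assign the fault class whose KLT subspace gives the smallest
    Euclidean distance between the signal and its projection."""
    errors = {}
    for cls, B in bases.items():
        centred = signal - means[cls]
        projection = B @ (B.T @ centred)
        errors[cls] = np.linalg.norm(centred - projection)
    return min(errors, key=errors.get)
```

In use, one basis and mean would be computed per fault class (Thick1, Thick2, Thin, Normal) from the training set, and each test signal assigned to the class with the smallest residual.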

Keywords: Euclidean distance, fault classification, KLT, Radon Transform

Procedia PDF Downloads 251
1172 A Comparison of Implant Stability between Implant Placed without Bone Graft versus with Bone Graft Using Guided Bone Regeneration (GBR) Technique: A Resonance Frequency Analysis

Authors: R. Janyaphadungpong, A. Pimkhaokham

Abstract:

This prospective clinical study determined the insertion torque (IT) value and monitored the changes in implant stability quotient (ISQ) values during a 12-week healing period after implant placement without bone graft (control group) and with bone graft using the guided bone regeneration (GBR) technique (study group). The relationship between the IT and ISQ values of the implants was also assessed. The control and study groups each consisted of 6 patients with 8 implants per group. The ASTRA TECH Implant System™ EV, 4.2 mm in diameter, was placed in the posterior mandibular region. In the control group, implants were placed in bone without bone graft, whereas in the study group implants were placed simultaneously with the GBR technique at favorable bone defects. The IT (Ncm) of each implant was recorded when fully inserted. ISQ values were obtained from the Osstell® ISQ at the time of implant placement and at 2, 4, 8, and 12 weeks. No difference in IT was found between groups (P = 0.320). The ISQ values in the control group were significantly higher than in the study group at the time of implant placement and at 4 weeks. There was no significant association between IT and ISQ values either at baseline or after 12 weeks. During healing, the two groups displayed different trends: mean ISQ values in the control group decreased over the first 2 weeks and then increased, with statistically significant increases from 8 weeks onward, whereas mean ISQ values in the study group decreased over the first 4 weeks and then increased, reaching statistical significance at 12 weeks. At 12 weeks, all implants achieved osseointegration, with mean ISQ values above the threshold (ISQ > 70). These results indicate that implants placed with the GBR technique for treating favorable bone defects are as predictable as implants placed without bone graft. However, loading of implants placed with the GBR technique for correcting favorable bone defects should be performed after 12 weeks of healing to ensure implant stability and osseointegration.

Keywords: dental implant, favorable bone defect, guided bone regeneration technique, implant stability

Procedia PDF Downloads 284
1171 Traumatic Brain Injury Induced Lipid Profiling of Lipids in Mice Serum Using UHPLC-Q-TOF-MS

Authors: Seema Dhariwal, Kiran Maan, Ruchi Baghel, Apoorva Sharma, Poonam Rana

Abstract:

Introduction: Traumatic brain injury (TBI) is a temporary or permanent alteration in brain function and pathology caused by an external mechanical force. It is a leading cause of mortality and morbidity among children and young adults. Various rodent models of TBI have been developed in the laboratory to mimic injury scenarios: blast overpressure injury is common among civilians and military personnel following accidents or exposure to explosive devices, while the lateral controlled cortical impact (CCI) model mimics blunt, penetrating injury. Method: In the present study, we used two different mild TBI models based on blast and CCI injury. In the blast model, helium gas was used to create an overpressure of 130 kPa (±5) via a shock tube; CCI injury was induced at an impact depth of 1.5 mm, producing diffuse and focal injury, respectively. C57BL/6J male mice (10-12 weeks) were divided into three groups, (1) control, (2) blast-treated, and (3) CCI-treated, and exposed to the respective injury models. Serum was collected on day 1 and day 7, followed by biphasic extraction using MTBE/methanol/water. Prepared samples were separated on a Charged Surface Hybrid (CSH) C18 column and acquired on a UHPLC-Q-TOF-MS with an ESI probe using in-house optimized parameters and methods. The MS peak list was generated using MarkerView™. Data were normalized, Pareto-scaled, and log-transformed, followed by multivariate and univariate analysis in MetaboAnalyst. Result and discussion: Untargeted lipid profiling generated extensive data features, which were annotated through LIPID MAPS® based on their m/z values and further confirmed from their fragmentation patterns by LipidBlast. In total, 269 features were annotated in positive and 182 in negative ionization mode. PCA and PLS-DA score plots showed clear segregation of the injury groups from controls. Among the various lipids altered in mild blast and CCI, five (the glycerophospholipids PC 30:2, PE O-33:3, PG 28:3;O3 and PS 36:1, and the fatty acyl FA 21:3;O2) were significantly altered in both injury groups at day 1 and day 7 and had VIP scores > 1. Pathway analysis by BioPAN also showed hampered synthesis of glycerolipids and glycerophospholipids, which coincides with earlier reports and could be a direct result of alteration in the acetylcholine signaling pathway in response to TBI. Understanding the metabolism, regulation, and transport of specific lipid classes could benefit TBI research by providing new targets and informing therapeutic intervention. This study demonstrates potential lipid biomarkers that can be used for injury severity diagnosis and identification irrespective of injury type (diffuse or focal).
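The preprocessing chain mentioned in the abstract (normalization, Pareto scaling, log transformation) can be sketched as below. This is a generic illustration of one common ordering of those steps, not the MetaboAnalyst implementation; the function name and matrix layout are assumptions.

```python
import numpy as np

def preprocess(intensities):
    """One common preprocessing chain for a peak-intensity matrix
    (rows = serum samples, columns = lipid features) before PCA/PLS-DA:
    total-intensity normalisation, log transform, then Pareto scaling."""
    X = intensities / intensities.sum(axis=1, keepdims=True)  # normalise each sample
    X = np.log1p(X)                                           # log transform
    X = X - X.mean(axis=0)                                    # mean-centre each feature
    return X / np.sqrt(X.std(axis=0))                         # Pareto: divide by sqrt(SD)
```

Pareto scaling (dividing by the square root of the standard deviation rather than the standard deviation itself) keeps high-abundance lipids from dominating the score plots while not inflating noise as much as full autoscaling.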

Keywords: LipidBlast, lipidomic biomarker, LIPID MAPS®, TBI

Procedia PDF Downloads 102
1170 Possibilities of Psychodiagnostics in the Context of Highly Challenging Situations in Military Leadership

Authors: Markéta Chmelíková, David Ullrich, Iva Burešová

Abstract:

The paper maps the possibilities and limits of diagnosing selected personality and performance characteristics of military leadership and psychology students in the context of coping with challenging situations. Individuals vary greatly in their ability to manage extreme situations effectively, yet existing diagnostic tools are often criticized, mainly for their low predictive power. Every modern army today focuses primarily on the systematic minimization of potential risks, including the prediction of desirable forms of behavior and the performance of military commanders. Military leadership is well known for its life-threatening context, so researching stress load in this specific setting is crucial for anticipating human failure in managing extreme situations. The aim of this pilot study, built around a 24-hour experiment, is to test whether a specific combination of psychodiagnostic methods can identify people well equipped to cope with an increased stress load. We conducted the experiment with an experimental group (N = 13) in a bomb shelter and a control group (N = 11) in a classroom. Both groups comprised military leadership students (N = 11) and psychology students (N = 13) and were balanced in terms of study type and gender. Participants were administered the following battery of personality measures once at the beginning of the experiment: Big Five Inventory 2 (BFI-2), Short Dark Triad (SD-3), Emotion Regulation Questionnaire (ERQ), Fatigue Severity Scale (FSS), and Impulsive Behavior Scale (UPPS-P). Alongside this, a battery consisting of the Test of Attention (d2) and the Bourdon test was administered four times in total, at 6-hour intervals. To better simulate an extreme situation, we attempted to induce sleep deprivation: participants were required to stay awake throughout the experiment. Despite the assumption that a stay in an underground bomb shelter would impair cognitive performance, this expectation was confirmed in only one measurement, which can be regarded as marginal in the context of multiple testing. This finding is a fundamental insight into stress management in extreme situations, which is crucial for effective military leadership. The results suggest that a 24-hour stay in a shelter combined with sleep deprivation does not impose sufficient stress on an individual to be reflected in the level of cognitive performance. In light of these findings, it would be interesting in the future to extend the diagnostic battery with physiological indicators of stress, such as heart rate, stress score, physical stress, and mental stress.

Keywords: bomb shelter, extreme situation, military leadership, psychodiagnostic

Procedia PDF Downloads 82
1169 Teachers' Attitude and Knowledge as Predictors of Effective Use of Digital Devices for the Education of Students with Special Needs in Oyo, Nigeria

Authors: Faseluka Olamide Tope

Abstract:

Giving quality education to students with special needs requires that all necessary resources be harnessed, and digital devices have become an important part of the instructional materials used in educating such students. The teachers who use these technologies are among the most important elements of any educational programme, and the effective use of the technologies depends largely on them. Among the many determinants of effective use, this study examines teachers' attitude and knowledge as predictors of the effective use of digital technology for the education of students with special needs in Oyo State, Nigeria. A descriptive survey research design of the ex-post-facto type was adopted, using a simple random sampling technique with sixty (60) participants. Two research questions and two research hypotheses were formulated. The data collected through the research instruments were analysed using frequency, percentage, mean and standard deviation, Pearson Product-Moment Correlation (PPMC), and multiple regression analysis. The study revealed a significant relationship between teachers' attitude (r = 0.50, p < 0.05) and the effective use of digital technologies for special needs students. Furthermore, the independent variables (teachers' attitude and teachers' knowledge) made a significant joint contribution to the dependent variable, effective use of digital technologies (F = 4.289; R = 0.876; R² = 0.758), with teachers' knowledge contributing most (b = 7.926, t = 4.376). The study therefore shows that teachers' attitude and knowledge are potent predictors of the effective use of digital technologies for the education of students with special needs. It recommends that, given the ever-changing nature of technology and its new features, teachers be equipped with the appropriate knowledge to use these devices effectively and develop the right attitude toward their use.
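A multiple regression of the kind reported above can be illustrated with synthetic data. Everything here is hypothetical (the coefficients and scores are invented); the sketch only shows the mechanics of regressing an outcome on two predictors by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60                                      # sixty respondents, as in the study
attitude = rng.normal(3.5, 0.5, n)          # invented Likert-style scores
knowledge = rng.normal(3.0, 0.6, n)
effective_use = 0.4 * attitude + 0.8 * knowledge + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), attitude, knowledge])    # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, effective_use, rcond=None)  # OLS coefficients
residuals = effective_use - X @ beta
r_squared = 1 - residuals @ residuals / np.sum((effective_use - effective_use.mean()) ** 2)
```

With data simulated this way, the fitted coefficient on knowledge comes out larger than the one on attitude, mirroring the study's finding that knowledge made the bigger contribution.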

Keywords: teachers’ knowledge, teachers’ attitude, digital devices, special needs students

Procedia PDF Downloads 14
1168 A Galectin from Rock Bream Oplegnathus fasciatus: Molecular Characterization and Immunological Properties

Authors: W. S. Thulasitha, N. Umasuthan, G. I. Godahewa, Jehee Lee

Abstract:

In fish, the innate immune defense is the first immune response against microbial pathogens and consists of several antimicrobial components. Galectins are carbohydrate-binding lectins that can identify pathogens by recognizing pathogen-associated molecular patterns, and they play a vital role in the regulation of innate and adaptive immune responses. Rock bream Oplegnathus fasciatus is one of the most important cultured species in Korea and Japan. Considering the losses due to microbial pathogens, the present study was carried out to understand the molecular and functional characteristics of a galectin under normal and pathogenic conditions, which could help establish an understanding of the immunological components of rock bream. The complete cDNA of rock bream galectin-like protein B (rbGal like B) was identified from a cDNA library, and in silico analysis was carried out using bioinformatics tools. The genomic structure was derived from a BAC library by sequencing a specific clone and using Spidey. The full-length rbGal like B cDNA (contig14775), containing 517 nucleotides, comprises a 435 bp open reading frame encoding a deduced protein of 145 amino acids. The molecular mass of the putative protein was predicted as 16.14 kDa, with an isoelectric point of 8.55. A characteristic conserved galactose-binding domain spans amino acids 12 to 145. The genomic structure of rbGal like B consists of 4 exons and 3 introns. Moreover, pairwise alignment showed that rbGal like B shares the highest similarity (95.9%) and identity (91%) with the Takifugu rubripes galectin-related protein B-like and the lowest similarity (55.5%) and identity (32.4%) with the Homo sapiens homolog. Multiple sequence alignment demonstrated that galectin-related protein B is conserved among vertebrates. Phylogenetic analysis revealed that rbGal like B clusters with other fish homologs in the fish clade, with the closest evolutionary link to Takifugu rubripes. Tissue distribution and expression patterns of rbGal like B upon immune challenge were examined using qRT-PCR assays. Among all tested tissues, rbGal like B expression was significantly highest in gill, followed by kidney, intestine, heart, and spleen. Upon immune challenge, expression was up-regulated by Edwardsiella tarda, rock bream iridovirus, and poly I:C up to 6 h post injection, and by LPS up to 24 h. In the presence of Streptococcus iniae, however, rbGal like B showed a fluctuating expression pattern peaking at 6-12 h. The results of the present study reveal the phylogenetic position of rbGal like B and its role in the response to microbial infection in rock bream.

Keywords: galectin like protein B, immune response, Oplegnathus fasciatus, molecular characterization

Procedia PDF Downloads 340
1167 Voice of Customer: Mining Customers' Reviews on On-Line Car Community

Authors: Kim Dongwon, Yu Songjin

Abstract:

This study identifies the business value of VOC (Voice of Customer). Specifically, we demonstrate how much the negative and positive sentiment of VOC influences car sales market share in the United States. We extracted seven emotions (sadness, shame, anger, fear, frustration, delight, and satisfaction) from the VOC data, 23,204 opinions posted on a car-related online community from 2007 to 2009 (part of a collection spanning 2007 to 2015), with the aim of clarifying the correlation between negative and positive sentiment keywords and contribution to market share. To develop a lexicon for each category of negative and positive sentiment, we used the corpus program AntConc 3.4.1w and the online sentiment resource SentiWordNet, and identified the part-of-speech (POS) information of words in the customers' opinions using the part-of-speech tagging function provided by TextAnalysisOnline. For the present study, a total of 45,741 customer opinions on 28 car manufacturers were collected, including titles and status information. We conducted an experiment to examine whether the inclusion, frequency, and intensity of terms with negative and positive emotions in each category affect vehicle organizations' market share. The experiment statistically verified a correlation between customer opinions containing negative and positive emotions and variation in market share. In particular, 'Anger,' one of the negative domains, significantly influences car sales market share, while the 'Delight' and 'Satisfaction' domains increased in proportion to growth in market share.
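The lexicon-matching step can be illustrated with a toy version. The mini-lexicons below are invented placeholders; the study built fuller ones with AntConc and SentiWordNet.

```python
import re
from collections import Counter

# Invented mini-lexicons; the study derived fuller ones from SentiWordNet.
LEXICON = {
    "anger": {"angry", "furious", "outraged"},
    "delight": {"delighted", "love", "wonderful"},
    "satisfaction": {"satisfied", "pleased", "reliable"},
}

def emotion_counts(opinion):
    """Tally how many tokens of a customer opinion fall in each emotion lexicon."""
    tokens = re.findall(r"[a-z']+", opinion.lower())
    return Counter({emotion: sum(t in words for t in tokens)
                    for emotion, words in LEXICON.items()})
```

Per-opinion counts like these would then be aggregated per manufacturer and correlated with market-share changes.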

Keywords: data mining, opinion mining, sentiment analysis, VOC

Procedia PDF Downloads 201
1166 Precursor Synthesis of Carbon Materials with Different Aggregates Morphologies

Authors: Nikolai A. Khlebnikov, Vladimir N. Krasilnikov, Evgenii V. Polyakov, Anastasia A. Maltceva

Abstract:

Carbon materials with advanced surfaces are widely used both in modern industry and in environmental protection. The physical-chemical nature of these materials is determined by the morphology of the primary atomic and molecular carbon structures, which are the basis for synthesizing zero-dimensional (fullerenes), one-dimensional (fibers, tubes), two-dimensional (graphene), and three-dimensional (multi-layer graphene, graphite, foams) carbon nanostructures with unique physical-chemical and functional properties. Experience shows that the microscopic morphological level is the basis for the creation of the next, mesoscopic morphological level, whose peculiarity is the dependence of the morphology on the chemical route and process history (crystallization, colloid formation, liquid-crystal state, and others). These factors determine the consumer properties of carbon materials, such as specific surface area, porosity, chemical resistance in corrosive environments, and catalytic and adsorption activity. Based on the ideology of precursor synthesis developed by the authors, one approach to controlling the porosity of carbon-containing materials with a given aggregate morphology is discussed; it rests on the low-temperature thermolysis of precursors in a gas environment of a given composition. The carbothermic precursor synthesis of two different compounds, tungsten carbide WC:nC and zinc oxide ZnO:nC, each containing an impurity phase in the form of free carbon, was selected as the subject of the research. In the first case, the object of synthesis was a transition metal (tungsten) that forms carbides; in the second case, zinc, which does not form carbides, was selected. Both kinds of transition metal compounds were synthesized by precursor carbothermic synthesis from organic solutions.
ZnO:nC composites were obtained by thermolysis of zinc succinate Zn(OO(CH2)2OO), formate glycolate Zn(HCOO)(OCH2CH2O)1/2, glycerolate Zn(OCH2CHOCH2OH), and tartrate Zn(OOCCH(OH)CH(OH)COO). The WC:nC composite was synthesized from ammonium paratungstate and glycerol. In all cases, carbon structures specific to diamond-like carbon forms appeared on the surface of the WC and ZnO particles after heat treatment. Tungsten carbide and zinc oxide were removed from the composites by selective chemical dissolution, preserving the amorphous carbon phase. This work presents the results of investigating the WC:nC and ZnO:nC composites, and the carbon nanopowders with tubular, tape, plate and onion aggregate morphologies separated by chemical dissolution of WC and ZnO from the composites, by SEM, TEM, X-ray phase analysis (XPA), Raman spectroscopy, and BET. The connection between the carbon morphology and both the conditions of synthesis and the chemical nature of the precursor, along with the possibility of tailoring the morphology of carbon-structured materials with specific surface areas up to 1700-2000 m²/g, is discussed.

Keywords: carbon morphology, composite materials, precursor synthesis, tungsten carbide, zinc oxide

Procedia PDF Downloads 316
1165 A Critical Discourse Analysis of the Construction of Artists' Reputation by Online Art Magazines

Authors: Thomas Soro, Tim Stott, Brendan O'Rourke

Abstract:

The construction of artistic reputation has been examined within sociology, philosophy, and economics, but, barring a few noteworthy exceptions, its discursive aspect has been largely ignored. This is particularly surprising given that contemporary artworks rely primarily on discourse to construct their ontological status. This paper contributes a discourse-analytical perspective to the broad body of literature on artistic reputation by showing how it is discursively constructed within the institutional context of online contemporary art magazines. Using corpora compiled from the websites of e-flux and ARTnews, two leading online contemporary art magazines, the paper examines how these organisations discursively construct the reputation of artists. By constructing word sketches of the term 'artist', it identified the most significant modifiers attributed to artists and the most significant verbs taking 'artist' as object or subject. The most significant results were analysed through concordances and demonstrated a somewhat surprising lack of evaluative representation. To examine this feature more closely, the paper then analysed three announcement texts from e-flux's site and three review texts from ARTnews' site, comparing the use of modifiers and verbs in the representation of artists, artworks, and institutions. The results of this analysis support the corpus findings, suggesting that artists are rarely represented in evaluative terms. Given the relatively high frequency of evaluation in the representation of artworks and institutions, these results suggest that there may be discursive norms at work in the field of online contemporary art magazines which regulate the use of verbs and modifiers in the evaluation of artists.
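A crude stand-in for the word-sketch step, counting the words that co-occur with 'artist', can be sketched as follows. A real word sketch relies on POS-tagged grammatical relations, which this toy window-based count ignores; all names here are invented.

```python
import re
from collections import Counter

def collocates(corpus, node="artist", window=3):
    """Count tokens appearing within +/-window positions of each occurrence
    of the node word, a rough proxy for word-sketch collocate lists."""
    tokens = re.findall(r"[a-z']+", corpus.lower())
    counts = Counter()
    for i, token in enumerate(tokens):
        if token != node:
            continue
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                counts[tokens[j]] += 1
    return counts
```

Run over each magazine's corpus, the most frequent verb and modifier collocates of 'artist' would then be inspected through concordances, as in the study.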

Keywords: contemporary art, corpus linguistics, critical discourse analysis, symbolic capital

Procedia PDF Downloads 145
1164 Passive Voice in SLA: Armenian Learners’ Case Study

Authors: Emma Nemishalyan

Abstract:

It is widely believed that learners' mother tongue (L1 hereafter) has a strong impact on their second language (L2 hereafter) acquisition. This hypothesis has attracted both positive and negative criticism: based on research on a wide range of learner corpora (Chinese, Japanese, and Spanish, among others), it has been either supported or rejected. However, no such study has been conducted on Armenian learners. The aim of this paper is to test the implications of the hypothesis on an Armenian learner corpus with respect to the use of the passive voice. To this end, the method of Contrastive Interlanguage Analysis (hereafter CIA) was applied to a native speakers' corpus (the Louvain Corpus of Native English Essays, LOCNESS) and an Armenian learner corpus compiled by the author in compliance with the International Corpus of Learner English (ICLE) guidelines. CIA compares the interlanguage (the language produced by learners) with the language produced by native speakers; it highlights not only the mistakes learners make but also their underuse or overuse of particular forms. The choice of the passive voice as the grammar issue is motivated by the fact that Armenian and English are typologically very different, belonging to different branches, and that the passive voice is considered one of the most problematic grammar topics for learners of English. On this basis, we hypothesized that Armenian learners would either overuse or underuse some types of the passive voice. With the help of the LancsBox software, we identified the frequency of passive voice usage in LOCNESS and in the Armenian learner corpus to determine whether the latter follows the same usage pattern as native speakers. We then identified the types of passive voice used by the Armenian learners, trying to trace the reasons back to their mother tongue. The results of the study showed that Armenian learners underused the passive voice compared with native speakers. Furthermore, the hypothesis that learners' L1 has an impact on their L2 acquisition and production was supported.
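Counting passive constructions across corpora, done here with LancsBox, can be roughed out with a naive heuristic. The pattern below (a form of "be" plus a word ending in -ed/-en) misses irregular participles and flags some false positives; it is only a sketch, not the study's method.

```python
import re

# Naive passive detector: a form of "be" followed by a word ending in
# -ed/-en (a rough past-participle heuristic; real studies use POS taggers).
PASSIVE = re.compile(r"\b(am|is|are|was|were|been|being|be)\s+\w+(ed|en)\b", re.I)

def passive_rate(text):
    """Passive constructions per 100 words, by the heuristic above."""
    words = len(text.split())
    return 100 * len(PASSIVE.findall(text)) / max(words, 1)
```

Under CIA, comparable per-100-word rates from the learner corpus and LOCNESS would then be compared to detect underuse or overuse.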

Keywords: corpus linguistics, applied linguistics, second language acquisition, corpus compilation

Procedia PDF Downloads 82
1163 Vulnerability and Risk Assessment, and Preparedness to Natural Disasters of Schools in Southern Leyte, Philippines

Authors: Lorifel Hinay

Abstract:

Natural disasters have increased in frequency and severity in the Philippines over the years, resulting in detrimental impacts on school property and the lives of learners. The topography of the Province of Southern Leyte makes it a hotspot for natural hazards that could affect schools, cripple the educational system, and cause environmental, cultural, and social damage, making Disaster Risk Reduction and Management (DRRM) an indispensable platform for keeping learners safe, secure, and resilient. This study determined schools' vulnerability and risk with respect to earthquake, landslide, flood, storm surge, and tsunami hazards, and their relationship to the schools' status of disaster preparedness. A descriptive-correlational research design was used, with School DRRM Coordinators/School Administrators and Municipal DRRM Officers as respondents. Schools' vulnerability and risk were found to be high for landslide, medium for earthquake, and low for flood, storm surge, and tsunami. Though schools were moderately prepared for disasters across all hazards, they were less accomplished in group organization and property security. Less planning and less implementation of DRRM measures were observed in schools highly at risk of earthquake and landslide, while schools vulnerable to landslide and flood had very high property security. Topography and location contributed greatly to schools' vulnerability to hazards; a school-based disaster preparedness plan is therefore hoped to help hazard-exposed schools build a culture of safety, disaster resiliency, and education continuity.

Keywords: disaster risk reduction and management, earthquake, flood, landslide, storm surge, tsunami

Procedia PDF Downloads 106
1162 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things (IoT) devices, and communication technologies, location-aware services have become increasingly popular and permeate every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning, particularly indoors, in dense urban and suburban areas enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the many surrounding objects; (2) reflection within a building depends heavily on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals are too weak to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. These challenges limit IoT applications. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization.
We also employ a reliable signal-fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and Long-Term Evolution (LTE) fingerprints. The proposed scheme reduced the site-surveying workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and significantly reduces radio map construction costs compared to traditional methods.
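The paper's S-DCGAN pipeline is beyond a short sketch, but the core of any fingerprinting-based localization system, matching a measured RSSI vector against a radio map, can be illustrated with a weighted k-nearest-neighbour estimator. All data and names below are invented for the example.

```python
import numpy as np

def knn_locate(db_fingerprints, db_positions, query, k=3):
    """Weighted k-NN fingerprint localization: average the positions of the
    k reference points whose RSSI vectors are closest to the query vector."""
    dists = np.linalg.norm(db_fingerprints - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)   # closer reference points weigh more
    return (db_positions[nearest] * weights[:, None]).sum(axis=0) / weights.sum()
```

Generative approaches such as the one proposed here aim to synthesize additional rows for the fingerprint database, reducing how much of it must be collected by manual site survey.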

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 26
1161 Detection of Egg Proteins in Food Matrices (2011-2021)

Authors: Daniela Manila Bianchi, Samantha Lupi, Elisa Barcucci, Sandra Fragassi, Clara Tramuta, Lucia Decastelli

Abstract:

Introduction: The undeclared allergens detection in food products plays a fundamental role in the safety of the allergic consumer. The protection of allergic consumers is guaranteed, in Europe, by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens to be mandatorily indicated on food labels: among these, an egg is included. An egg can be present as an ingredient or as contamination in raw and cooked products. The main allergen egg proteins are ovomucoid, ovalbumin, lysozyme, and ovotransferrin. This study presents the results of a survey conducted in Northern Italy aimed at detecting the presence of undeclared egg proteins in food matrices in the latest ten years (2011-2021). Method: In the period January 2011 - October 2021, a total of 1205 different types of food matrices (ready-to-eat, meats, and meat products, bakery and pastry products, baby foods, food supplements, pasta, fish and fish products, preparations for soups and broths) were delivered to Food Control Laboratory of Istituto Zooprofilattico Sperimentale of Piemonte Liguria and Valle d’Aosta to be analyzed as official samples in the frame of Regional Monitoring Plan of Food Safety or in the contest of food poisoning. The laboratory is ISO 17025 accredited, and since 2019, it has represented the National Reference Centre for the detection in foods of substances causing food allergies or intolerances (CreNaRiA). All samples were stored in the laboratory according to food business operator instructions and analyzed within the expiry date for the detection of undeclared egg proteins. Analyses were performed with RIDASCREEN®FAST Ei/Egg (R-Biopharm ® Italia srl) kit: the method was internally validated and accredited with a Limit of Detection (LOD) equal to 2 ppm (mg/Kg). It is a sandwich enzyme immunoassay for the quantitative analysis of whole egg powder in foods. 
Results: Egg proteins were found in 2% (n. 28) of food matrices, including meats and meat products (n. 16), fish and fish products (n. 4), bakery and pastry products (n. 4), pasta (n. 2), preparations for soups and broths (n. 1) and ready-to-eat foods (n. 1). In particular, egg proteins were detected in 5% of samples in 2011, 4% in 2012, 2% in 2013, 2016 and 2018, and 3% in 2014, 2015 and 2019. No egg protein traces were detected in 2017, 2020, and 2021. Discussion: Food allergies occur in the Western world in 2% of adults and up to 8% of children, and egg allergy is one of the most common food allergies in pediatrics. The percentage of positivity obtained in this study is, however, low. The trend over the ten years was slightly variable, with comparable data.

Keywords: allergens, food, egg proteins, immunoassay

Procedia PDF Downloads 121
1160 Changing the Landscape of Fungal Genomics: New Trends

Authors: Igor V. Grigoriev

Abstract:

Understanding the biological processes encoded in fungi is instrumental in addressing the future food, feed, and energy demands of the growing human population. Genomics is a powerful and quickly evolving tool for understanding these processes. The Fungal Genomics Program of the US Department of Energy Joint Genome Institute (JGI) partners with researchers around the world to explore fungi in several large-scale genomics projects, changing the fungal genomics landscape. The key trends of these changes include: (i) a rapidly increasing scale of sequencing and analysis, (ii) developing approaches to go beyond culturable fungi and explore fungal 'dark matter', or unculturables, and (iii) functional genomics and multi-omics data integration. The power of comparative genomics has recently been demonstrated in several JGI projects targeting mycorrhizae, plant pathogens, wood decay fungi, and sugar-fermenting yeasts. The largest JGI project, '1000 Fungal Genomes', aims at exploring the diversity across the Fungal Tree of Life in order to better understand fungal evolution and to build a catalogue of genes, enzymes, and pathways for biotechnological applications. At this point, at least 65% of the over 700 known families have one or more reference genomes sequenced, enabling metagenomics studies of microbial communities and their interactions with plants. For many of the remaining families, no representative species are available from culture collections. To sequence genomes of unculturable fungi, two approaches have been developed: (a) sequencing DNA from fruiting bodies of 'macro' fungi and (b) single-cell genomics using fungal spores. The latter has been tested using zoospores from the early diverging fungi and has produced several near-complete genomes from underexplored branches of the Fungal Tree, including the first genomes of Zoopagomycotina. The genome sequence serves as a reference for transcriptomics studies, the first step towards functional genomics.
In the JGI fungal mini-ENCODE project, transcriptomes of the model fungus Neurospora crassa grown on a spectrum of carbon sources have been collected to build regulatory gene networks. Epigenomics is another tool for understanding gene regulation, and recently introduced single-molecule sequencing platforms not only provide better genome assemblies but can also detect DNA modifications. For example, the 6mC methylome was surveyed across many diverse fungi, and the highest levels of 6mC methylation among Eukaryota were reported. Finally, data production at such scale requires data integration to enable efficient analysis. Over 700 fungal genomes and other -omes have been integrated into the JGI MycoCosm portal and equipped with comparative genomics tools to enable researchers to address a broad spectrum of biological questions and applications for bioenergy and biotechnology.

Keywords: fungal genomics, single cell genomics, DNA methylation, comparative genomics

Procedia PDF Downloads 190
1159 The Molecular Mechanism of Vacuolar Function in Yeast Cell Homeostasis

Authors: Chang-Hui Shen, Paulina Konarzewska

Abstract:

Cell homeostasis is regulated by vacuolar activity, and it has been shown that the lipid composition of the vacuole plays an important role in vacuolar function. The major phosphoinositide species present in the vacuolar membrane include phosphatidylinositol 3,5-bisphosphate (PI(3,5)P₂), which is generated from PI(3)P under the control of Fab1p. Deletion of the FAB1 gene reduces the synthesis of PI(3,5)P₂ and thus results in enlarged or fragmented vacuoles with neutral vacuolar pH due to reduced vacuolar H⁺-ATPase activity. These mutants also exhibit poor growth at high extracellular pH and in the presence of CaCl₂. Conversely, VPS34 regulates the synthesis of PI(3)P from phosphatidylinositol (PI), and the lack of Vps34p results in reduced vacuolar activity. Although the cellular observations are clear, the molecular mechanism linking the phospholipid biosynthesis pathway and vacuolar activity is still unknown. Since both VPS34 and FAB1 are important for vacuolar activity, we hypothesize that vacuolar function might be regulated by the transcriptional regulators of phospholipid biosynthesis. Here we study the role of the major phospholipid biosynthesis transcription factor, Ino2p, in the regulation of vacuolar activity. We first performed qRT-PCR to examine the effect of Ino2p on the expression of VPS34 and FAB1. Our results showed that VPS34 was upregulated in the presence of inositol in both WT and ino2Δ cells, whereas FAB1 was significantly upregulated only in ino2Δ cells, indicating that Ino2p might be a negative regulator of FAB1 expression. Next, growth sensitivity experiments showed that WT, vma3Δ, and ino2Δ cells grew well in growth medium buffered to pH 5.5 containing 10 mM CaCl₂. When cells were switched to growth medium buffered to pH 7 containing CaCl₂, WT, ino2Δ and opi1Δ showed growth reduction, whereas vma3Δ was completely nonviable.
As the concentration of CaCl₂ was increased to 60 mM, ino2Δ cells showed only moderate growth reduction compared to WT, suggesting that ino2Δ cells have better vacuolar activity. Microscopic analysis and vacuolar acidification assays were employed to further elucidate the importance of INO2 in vacuolar homeostasis. Analysis of vacuolar morphology indicated that WT and vma3Δ cells displayed vacuoles occupying a small area of the cell when grown in media buffered to pH 5.5, whereas ino2Δ cells displayed fragmented vacuoles. In media buffered to pH 7, all strains exhibited enlarged vacuoles that occupied most of the cell. This indicates that INO2 may have a negative effect on vacuolar morphology when cells are grown in media buffered to pH 5.5. Furthermore, the vacuolar acidification assay showed that only vma3Δ cells displayed notably less acidic vacuoles when grown in media buffered to pH 5.5 or pH 7, while ino2Δ cells displayed a more acidic vacuolar pH than WT at pH 7. Taken together, our results demonstrate a molecular mechanism by which vacuolar activity is regulated by the phospholipid biosynthesis transcription factor Ino2p: Ino2p negatively regulates vacuolar activity through the expression of FAB1.

Keywords: vacuole, phospholipid, homeostasis, Ino2p, FAB1

Procedia PDF Downloads 118
1158 Openness to Linguistic and Value Diversity as a Key Factor in the Development of a Learning Community

Authors: Caterina Calicchio, Talia Sbardella

Abstract:

The ability to move through geographical and symbolic spaces is key to building new nodes and social relationships. Especially in the framework of language learning, accepting and valuing diversity can help create a constructive atmosphere of cooperation, innovation, and creativity. It is therefore important to outline the stages of forming a learning community, focusing on the characteristics that can favor its development. Since elements such as curiosity and motivation are known to be significant for individual language learning, the study investigates how factors like openness to diversity and cultural immersion could improve the learning and teaching of Italian. This paper aims to identify the factors that could be significant for the development of a learning community by presenting a case study on a course in Italian as a second language for beginners. First, the theoretical matrices underlying social learning are outlined. Secondly, a quantitative study is described, based on an adaptation of the Openness to Diversity and Challenge psychometric scale questionnaire developed at the Umbra Institute. The questionnaire was delivered to 52 American college students and contained open-ended and closed-ended questions; students were asked to state their level of agreement with a set of statements on a six-point Likert scale ranging from (1) Strongly disagree to (6) Strongly agree. The data were analyzed with quantitative and qualitative methods and represented in a pie chart and a histogram; means and frequencies were also calculated. The research findings demonstrate that openness to diversity and challenge enhances cross-cutting skills such as intercultural and communicative competence: through cultural immersion and the opportunity to speak with locals, the participants were able to develop their own Italian L2 language community.
The goal is to share with the scientific community some insights to trace possible future lines of research.
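As an illustrative sketch (not part of the original study), the descriptive statistics named above, mean and frequency of six-point Likert responses, can be computed as follows; the response values are invented for the example.

```python
# Hypothetical sketch: summarizing one item of a 6-point Likert questionnaire
# (1 = Strongly disagree ... 6 = Strongly agree) with mean and frequency.
# The responses below are illustrative, not the study's data.
from collections import Counter

responses = [5, 6, 4, 5, 3, 6, 5, 4, 6, 5]  # one item, ten respondents

mean = sum(responses) / len(responses)
freq = Counter(responses)                    # frequency of each scale point

print(f"mean = {mean:.2f}")
for point in range(1, 7):
    print(f"{point}: {freq.get(point, 0)}")
```

The frequency table is what would feed the pie chart and histogram described in the abstract.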

Keywords: Italian as second language, language learning, learning community, openness to diversity

Procedia PDF Downloads 56
1157 Dynamic Analysis of Mono-Pile: Spectral Element Method

Authors: Rishab Das, Arnab Banerjee, Bappaditya Manna

Abstract:

Mono-pile foundations are often used in soft soils to support heavy mega-structures, and these deep footings may undergo dynamic excitation from many causes, such as earthquake, wind or wave loads acting on the superstructure, blasting, and unbalanced machines. A comprehensive analytical study is performed on the dynamics of a mono-pile system embedded in cohesionless soil. The soil is considered homogeneous and visco-elastic and is modeled analytically using complex springs. Dividing the pile into N elements, the final global stiffness matrix is obtained using the spectral element method. Statically condensing the intermediate internal nodes of the global stiffness matrix then yields a smaller submatrix containing only the nodes experiencing external translation and rotation, from which the stiffness and damping functions (impedance functions) of the embedded pile are determined. Plots showing the variation of the real and imaginary parts of these impedance functions with the dimensionless frequency parameter are obtained and validated against those provided by Novak (1974). Further, the dynamic analysis of a resonator-impregnated pile is proposed within this study. With the aid of Wood's 1g laboratory scaling law, a properly scaled-down resonator-pile model is 3D printed in PLA. Dynamic analysis of the scaled model is carried out in the time domain, with lateral loads imposed on the pile head, and the response obtained from the sensors through LabVIEW software is compared with the proposed theoretical data.
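The static condensation step described above can be sketched generically: partitioning the global stiffness matrix into kept (boundary) and condensed (internal) blocks gives the reduced matrix K_c = K_bb − K_bi K_ii⁻¹ K_ib. This is a minimal numerical illustration, not the authors' code; in the spectral element setting the matrix would be complex-valued and frequency dependent.

```python
# Illustrative sketch of static condensation: eliminate internal DOFs,
# keeping only the DOFs with external translation and rotation.
import numpy as np

def condense(K, keep):
    """Condense out all DOFs not in `keep`: Kc = Kbb - Kbi @ Kii^-1 @ Kib."""
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(K.shape[0]), keep)
    Kbb = K[np.ix_(keep, keep)]
    Kbi = K[np.ix_(keep, drop)]
    Kib = K[np.ix_(drop, keep)]
    Kii = K[np.ix_(drop, drop)]
    return Kbb - Kbi @ np.linalg.solve(Kii, Kib)

# toy 3-DOF chain; condense out the middle (internal) DOF
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
Kc = condense(K, keep=[0, 2])
print(Kc)
```

For the pile problem the same operation, applied to the assembled spectral element matrix at each frequency, yields the head impedance functions directly.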

Keywords: mono-pile, visco-elastic, impedance, LabView

Procedia PDF Downloads 98
1156 On the Monitoring of Structures and Soils by Tromograph

Authors: Magarò Floriana, Zinno Raffaele

Abstract:

Since 2009, with the entry into force of the January 14, 2008 Ministerial Decree "New technical standards for construction" and the explanatory ministerial circular No. 617 of February 2, 2009, the question of seismic hazard and the design of seismic-resistant structures in Italy has acquired increasing importance. One of the most discussed aspects in recent Italian and international scientific literature concerns the dynamic interaction between soil and structure and the effects that dynamic coupling may have on individual buildings. Indeed, it is well known from systems dynamics that resonance can have catastrophic effects on a stimulated system, leading to a response that is not compatible with the predictions made in the design phase. The method used in this study to estimate the oscillation frequency of the structure is the analysis of the HVSR (Horizontal to Vertical Spectral Ratio), which allows a very simple evaluation of the oscillation frequencies of both soils and structures. The instrument used for data acquisition is an experimental digital tromograph, an engineered development of the experimental Languamply RE 4500 tromograph, equipped with an engineered amplification circuit and improved electronically using extremely small components (each individual amplifier measures 16 x 26 mm). This tromograph is a modular system, completely "free" and "open", designed to interface Windows, Linux, OSX and Android with the outside world. It is an amplifier designed to carry out microtremor measurements, yet also useful for seismological and seismic measurements in general. The small size of the single amplifiers allows for a very clean signal: positioning an amplifier a few centimetres from the geophone eliminates cable "antenna" phenomena, a necessary characteristic when seeking clean signals at the very low voltages to be measured.
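A common way to compute the HVSR named above (an assumed generic workflow, not the tromograph's firmware) is to divide the geometric mean of the two horizontal amplitude spectra by the vertical amplitude spectrum; the peak of the ratio estimates the fundamental resonance frequency. The signals below are synthetic.

```python
# Minimal HVSR sketch on synthetic microtremor records: horizontals
# resonate near 2 Hz, the vertical component is noise.
import numpy as np

fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)

ns = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)
ew = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)
v = 0.5 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
NS, EW, V = (np.abs(np.fft.rfft(x)) for x in (ns, ew, v))
hvsr = np.sqrt(NS * EW) / (V + 1e-12)        # geometric mean H over V
hvsr = np.convolve(hvsr, np.ones(7) / 7, mode="same")  # light smoothing

band = (freqs > 0.5) & (freqs < 10.0)
f0 = freqs[band][np.argmax(hvsr[band])]
print(f"estimated fundamental frequency: {f0:.2f} Hz")
```

In practice, field HVSR processing adds windowing, window rejection, and heavier spectral smoothing; the sketch keeps only the core ratio-and-peak step.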

Keywords: microtremor, HVSR, tromograph, structural engineering

Procedia PDF Downloads 394
1155 Rheological Properties of Red Beet Root Juice Squeezed from Ultrasonicated Red Beet Root Slices

Authors: M. Çevik, S. Sabancı, D. Tezcan, C. Çelebi, F. İçier

Abstract:

Ultrasound technology is one of the non-thermal food processing methods that has been used widely in the food industry in recent years. Ultrasound applications in the food industry are divided into two groups: low- and high-intensity. While low-intensity ultrasound is used to obtain information about the physicochemical properties of foods, high-intensity ultrasound is used to extract bioactive components and to inactivate microorganisms and enzymes. In this study, an ultrasound pre-treatment at constant power (1500 W) and fixed frequency (20 kHz) was applied to red beetroot slices measuring 25×25×50 mm at constant temperature (25°C) for different application times (0, 5, 10, 15 and 20 min). The ultrasonicated red beetroot slices were squeezed immediately, and the changes in the rheological properties of the juice depending on the ultrasonication duration applied to the slices were investigated. Rheological measurements were conducted using a Brookfield viscometer (LVDV-II Pro, USA). Shear stress-shear rate data were obtained over the 0-200 rpm range using spindle 18, and the rheological properties of the juice were determined by fitting these data to several rheological models (Newtonian, Bingham, Power Law, Herschel-Bulkley). The best-fitting model was the Power Law model for both untreated red beetroot juice (R²=0.991, χ²=0.0007, RMSE=0.0247) and juice produced from ultrasonicated slices (R²=0.993, χ²=0.0006, RMSE=0.0216 for the 20 min pre-treatment). The k (consistency coefficient) and n (flow behavior index) values of the juices were not affected by the duration of ultrasonication applied to the slices: ultrasound treatment did not change the rheological properties of red beetroot juice, which can be explained by the inability of the applied ultrasound intensity to homogenize the sample.
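The Power Law (Ostwald-de Waele) fit named above, τ = k·γ̇ⁿ, can be obtained by linear regression in log-log space. This is a hedged sketch with synthetic shear data, not the paper's measurements or code.

```python
# Fit tau = k * gamma_dot**n by regressing log(tau) on log(gamma_dot):
# the slope is n (flow behavior index), exp(intercept) is k (consistency).
import numpy as np

gamma_dot = np.array([10., 20., 50., 100., 150., 200.])  # shear rate, 1/s
tau = 2.0 * gamma_dot ** 0.5                             # synthetic stress, Pa

n, log_k = np.polyfit(np.log(gamma_dot), np.log(tau), 1)
k = np.exp(log_k)

print(f"k (consistency coefficient) = {k:.3f}")
print(f"n (flow behavior index)     = {n:.3f}")  # n < 1: shear-thinning
```

Comparing R², χ², and RMSE of this fit against Newtonian, Bingham, and Herschel-Bulkley fits is how the best model is selected in the abstract.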

Keywords: ultrasonication, rheology, red beet root slice, juice

Procedia PDF Downloads 390
1154 The Benefit of a Universal Screening Program for Lipid Disorders in Two to Ten Years Old Lebanese Children

Authors: Nicolas Georges, Akiki Simon, Bassil Naim, Nawfal Georges, Abi Fares Georges

Abstract:

Introduction: Dyslipidemia is a recognized risk factor for cardiovascular disease. While atherosclerotic lesions begin to develop in childhood and progress throughout life, data on the prevalence of dyslipidemia among children in Lebanon are lacking. Objectives: This study was conducted to assess the benefit of a universal screening protocol for lipid disorders in Lebanese children aged two to ten years. Materials and Methods: A total of four hundred children aged 2 to 10 years (51.5% boys) were included in the study. The subjects were recruited from private pediatric clinics after parental consent. Fasting total cholesterol (TC), triglyceride (TG), low-density lipoprotein (LDL) and high-density lipoprotein (HDL) levels were measured, and non-HDL cholesterol was calculated. The values were categorized according to the 2011 Expert Panel on Integrated Guidelines for Cardiovascular Health and Risk Reduction in Children and Adolescents. Results: The overall prevalence of high TC (≥ 200 mg/dL), high non-HDL-C (≥ 145 mg/dL), high LDL (≥ 130 mg/dL), high TG (≥ 100 mg/dL) and low HDL (< 40 mg/dL) was 19.5%, 23%, 19%, 31.8% and 20%, respectively. The overall frequency of dyslipidemia was 51.7%. In a bivariate analysis, dyslipidemia in children was associated with a BMI ≥ 95ᵗʰ percentile and with parents having TC > 240 mg/dL (P=0.006 and P=0.0001, respectively). Furthermore, high TG was independently associated with a BMI ≥ 95ᵗʰ percentile (P=0.0001), and parental TC > 240 mg/dL was significantly correlated with high TC, high non-HDL-C and high LDL (P=0.0001 for all variables). Finally, according to the pediatric dyslipidemia screening guidelines from the 2011 Expert Panel, 62.3% of dyslipidemic children had at least one risk factor qualifying them for screening, while 37.7% had no risk factor.
Conclusions: The latest pediatric dyslipidemia screening guidelines should be reconsidered in favor of a universal screening program, since over a third of the dyslipidemic Lebanese children in this study would have been missed by risk-factor-based screening.
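The cut-points reported in the abstract can be encoded directly; a panel is flagged dyslipidemic if any fraction crosses its threshold. The example values below are hypothetical, not study data.

```python
# Cut-points as reported in the abstract (2011 Expert Panel values):
# high TC >= 200, non-HDL >= 145, LDL >= 130, TG >= 100, low HDL < 40 (mg/dL).
CUTOFFS = {
    "TC":      lambda v: v >= 200,
    "non_HDL": lambda v: v >= 145,
    "LDL":     lambda v: v >= 130,
    "TG":      lambda v: v >= 100,
    "HDL":     lambda v: v < 40,   # low HDL is the abnormal direction
}

def classify(panel):
    """Return the list of abnormal lipid fractions for one child's panel."""
    return [name for name, abnormal in CUTOFFS.items() if abnormal(panel[name])]

child = {"TC": 210, "non_HDL": 150, "LDL": 120, "TG": 90, "HDL": 45}
flags = classify(child)
print(flags)  # TC and non-HDL cross their thresholds in this example
```

Applying `classify` across all 400 panels and counting non-empty results reproduces the kind of prevalence figures reported above.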

Keywords: cardiovascular risk factors, dyslipidemia, Lebanese children, screening

Procedia PDF Downloads 219
1153 Knowledge and Capabilities of Primary Caregivers in Providing Quality Care for Elderly Patients with Post-Operative Hip Fracture, Songklanagarind Hospital

Authors: Manee Hasap, Mongkolchai Hasap, Tasanee Nasae

Abstract:

The purpose of this study was to evaluate primary caregivers' knowledge and capabilities in providing quality care to hospitalized elderly patients after hip fracture surgery. The theoretical framework of the study was derived from the concept of dependent-care agency in Orem's Self-Care theory and from family care provision for elderly and chronically ill patients. Fifty-nine subjects were purposively selected: primary caregivers of post-operative hip fracture elderly patients admitted to the Orthopaedic Ward of Songklanagarind Hospital. Demographic data of caregivers and patients were collected, and caregiving was assessed by non-participant observation using evaluation and recording forms. The reliability of the caregivers' knowledge measurement (0.86) was obtained by KR-20, and that of the caregivers' capabilities evaluation form for post-operative care (0.97) was obtained from two observers by inter-rater reliability. The data were analyzed using descriptive statistics: frequency, percentage, mean, and standard deviation. The results indicated that elderly patients after hip fracture surgery had many pre-discharge self-care limitations. Approximately 75% of the caregivers had knowledge of patients' essential needs at a high level, while the rest (25%) had this knowledge at a moderate level. On observation, 57.63% of the subjects had care-practice capabilities at a moderate level, 28.81% at a high level, and 13.56% at a low level. The results of this study can be used as basic information for a capability-development plan for patients and caregivers, especially for supporting patients' activities, accident surveillance, and complication prevention, toward a good quality of life for elderly patients after hip surgery, both during hospitalization and rehabilitation at home.

Keywords: caregivers’ knowledge, caregivers’ capabilities, elderly hip fracture patients

Procedia PDF Downloads 546
1152 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models (DPM) achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation; this implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation, an approach almost an order of magnitude faster than the reference DPM implementation with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location; the range of acceptable sizes and positions is set by examining the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and a twofold increase in speed. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon; supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best-performing DPM-based method. By allowing a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
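The frequency-domain trick mentioned above rests on the convolution theorem: a spatial convolution becomes an element-wise product of Fourier transforms. The toy sketch below (not the authors' implementation, which correlates DPM filters with HOG feature maps) checks FFT-based circular convolution against a direct spot computation.

```python
# Circular 2D convolution via the convolution theorem:
# conv(image, kernel) = IFFT( FFT(image) * FFT(kernel zero-padded) ).
import numpy as np

def fft_convolve2d(image, kernel):
    H = np.fft.fft2(image)
    K = np.fft.fft2(kernel, s=image.shape)   # zero-pad kernel to image size
    return np.real(np.fft.ifft2(H * K))

rng = np.random.default_rng(1)
img = rng.standard_normal((32, 32))
ker = rng.standard_normal((5, 5))

fast = fft_convolve2d(img, ker)

# direct circular convolution at one output pixel, as a spot check
y = x = 7
direct = sum(img[(y - i) % 32, (x - j) % 32] * ker[i, j]
             for i in range(5) for j in range(5))
print(np.isclose(fast[y, x], direct))
```

For many filters applied to one feature map, the image FFT is computed once and reused, which is where the order-of-magnitude speedup comes from.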

Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time

Procedia PDF Downloads 265
1151 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors in the energy efficiency policies of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice in high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues of the energy refurbishment of a university building in the heating-dominated climate of Southern Italy. In more detail, the importance of using validated models is examined exhaustively through an analysis of the uncertainties due to modelling assumptions, mainly the adoption of stochastic schedules for occupant behavior and for equipment or lighting usage. Today, most commercial tools provide designers with a library of predefined schedules with which thermal zones can be described. Very often, users do not take care to differentiate thermal zones or to modify and adapt the predefined profiles, and the design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in interpreting calibration results, mainly because of the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and to solve. In this paper, a sensitivity analysis is presented to quantify the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system and thus the occupants cannot interact with it.
In more detail, starting from the adopted schedules, created from questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request; then the different entries of consumption are analyzed and, for the most interesting cases, the calibration indexes are also compared. Moreover, the same simulations are performed for the optimal refurbishment solution, and the variation in the predicted energy saving and global cost reduction is shown. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.
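The scenario comparison described above reduces to a simple metric: the percentage difference of each scenario's projected energy need relative to the calibrated reference model. A minimal sketch follows; the scenario names and all numbers are invented for illustration.

```python
# Percentage difference of each schedule scenario vs. the calibrated
# reference model, per energy carrier. Values are illustrative only.
reference = {"electricity_kWh": 120_000, "natural_gas_m3": 18_000}

scenarios = {
    "standard_schedules":   {"electricity_kWh": 131_000, "natural_gas_m3": 19_500},
    "stochastic_occupancy": {"electricity_kWh": 117_500, "natural_gas_m3": 17_600},
}

diffs = {
    name: {entry: 100.0 * (vals[entry] - ref) / ref
           for entry, ref in reference.items()}
    for name, vals in scenarios.items()
}

for name, d in diffs.items():
    for entry, pct in d.items():
        print(f"{name:22s} {entry:16s} {pct:+.1f}%")
```

The same table, computed per consumption entry, is what reveals which schedule assumption drives the largest prediction error.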

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 129
1150 Ultrasonic Treatment of Baker’s Yeast Effluent

Authors: Emine Yılmaz, Serap Fındık

Abstract:

The baker’s yeast industry uses molasses, the end product of the sugar industry, as a raw material. Wastewater from molasses processing contains a large amount of colored substances that give a dark brown color and a high organic load to the effluent. The main colored compounds, known as melanoidins, are products of the Maillard reaction between amino acids and carbonyl groups in molasses. The dark color prevents sunlight penetration and reduces the photosynthetic activity and dissolved oxygen level of surface waters. Various methods, such as biological processes (aerobic and anaerobic), ozonation, wet air oxidation, and coagulation/flocculation, are used to treat baker’s yeast effluent, and adequate treatment before discharge is imperative. In addition, increasingly stringent environmental regulations are forcing distilleries to improve existing treatment and to find alternative methods of effluent management, or combinations of treatment methods. Sonochemical oxidation, which employs ultrasound and the resulting cavitation phenomena, is one such alternative. In this study, the decolorization of baker’s yeast effluent by ultrasound was investigated. The effluent was supplied by a factory located in northern Turkey. An ultrasonic homogenizer with an operating frequency of 20 kHz was used, with a TiO₂-ZnO catalyst as sonocatalyst. The effects of the TiO₂-ZnO molar proportion, calcination temperature and time, and catalyst amount on the decolorization of the effluent were investigated. The results showed that the composite TiO₂-ZnO prepared with a 4:1 molar proportion and treated at 700°C for 90 min provided the best result: the decolorization at 15 min was 3% without catalyst and 14.5% with this catalyst.

Keywords: baker’s yeast effluent, decolorization, sonocatalyst, ultrasound

Procedia PDF Downloads 454
1149 A Review on the Level of Development of Macedonia and Iran's Organic Agriculture as Compared to Nigeria

Authors: Yusuf Ahmad Sani, Adamu Alhaji Yakubu, Alhaji Abdullahi Jamilu, Joel Omeke, Ibrahim Jumare Sambo

Abstract:

With the rising global threat of food insecurity and of cancer and related (carcinogenic) diseases due to the increased use of inorganic substances in agricultural food production, the Ministry of Food, Agriculture and Livestock of the Republic of Turkey organized an International Workshop on Organic Agriculture on 8-12 December 2014 at the International Agricultural Research and Training Center, Izmir. Twenty-one countries, including Nigeria, were invited to attend the training workshop. Several topics on organic agriculture were presented by renowned scholars, ranging from regulation, certification, crop, animal and seed production, pest and disease management, and soil composting to the marketing of organic agricultural products. This paper purposely selected two of the 21 countries, Macedonia and Iran, to assess their level of development in organic agriculture as compared to Nigeria. Macedonia, with a population of only 2.1 million people as of 2014, started organic agriculture in 2005 with only 266 ha of land, which grew significantly to over 5,000 ha in 2010, covering crops such as cereals (62%), forage (20%), fruit orchards (7%), vineyards (5%), vegetables (4%), and oil seed and industrial crops (1% each). Organic beekeeping also grew from 110 hives to over 15,000 certified colonies. As part of the government's commitment, the level of government subsidy for organic products was set at 30%, compared to the direct support for conventional agricultural products, and about 19 by-laws fully consistent with European Union regulations were introduced on organic agricultural production. The Republic of Iran, on the other hand, embarked on organic agriculture partly because the country recorded the highest rate of cancer in the world, with over 30,000 people dying every year and 297 people diagnosed every day.
However, the host country, Turkey, is well advanced in organic agricultural production and is now the largest exporter of organic products to Europe and other parts of the globe. A technical trip to one of the villages under the government scheme on organic agriculture revealed that organic agriculture there was market-demand driven and that government support was very visible: farmers were linked with private companies that provide inputs to them, while the companies purchase the products at harvest at a high premium price. In Nigeria, however, research on organic agriculture is very recent, and information on it is very scanty due to poor documentation and very low awareness, even among the elites. The paper therefore recommends that the government provide funds to NARIs to conduct research on organic agriculture and establish a clear government policy and good preconditions for sustainable organic agricultural production in the country.

Keywords: organic agriculture, food security, food safety, food nutrition

Procedia PDF Downloads 16
1148 Changes on Some Physical and Chemical Properties of Red Beetroot Juice during Ultrasound Pretreatment

Authors: Serdal Sabanci, Mutlu Çevik, Derya Tezcan, Cansu Çelebi, Filiz Içier

Abstract:

Ultrasound is defined as sound waves with frequencies higher than 20 kHz, above the upper limit of the human hearing range. In recent years, ultrasonic treatment has become an emerging technology used increasingly in the food industry as an alternative technique for purposes such as microbial and enzyme inactivation, extraction, drying, filtration, crystallization, degassing, and cutting. Red beetroot (Beta vulgaris L.) is a root vegetable rich in mineral components, folic acid, dietary fiber, and anthocyanin pigments. In this study, the application of low-frequency, high-intensity ultrasound to red beetroot slices and to red beetroot juice for different treatment times (0, 5, 10, 15, 20 min) was investigated; the ultrasonicated red beetroot slices were squeezed immediately after treatment. Changes in the color, betanin content, pH, and titratable acidity of the red beetroot juices (the ultrasonicated juice (UJ) and the juice from ultrasonicated slices (JUS)) were determined. Although the color values of the JUS samples showed no statistically significant change due to ultrasound application (p>0.05), the color properties of UJ samples ultrasonicated for short durations were statistically different from the raw material (p<0.05); this difference disappeared (p>0.05) as the ultrasonication duration increased. The application of ultrasound to red beetroot slices adversely affected and decreased the betanin content of the JUS samples, whereas the betanin content of the UJ samples increased with ultrasonication duration. Ultrasound treatment did not statistically affect the pH or titratable acidity of the juices (p>0.05). The results suggest that ultrasound is a simple and economical technique that may successfully be employed for processing red beetroot juice with improved color and betanin quality.
However, further investigation is still needed to confirm this.

Keywords: red beetroot, ultrasound, color, betanin

Procedia PDF Downloads 384
1147 Visual Improvement Outcome of Pars Plana Vitrectomy Combined Endofragmentation and Secondary IOL Implantation for Dropped Nucleus After Cataract Surgery: A Case Report

Authors: Saut Samuel Simamora

Abstract:

PURPOSE: Nucleus drop is one of the most feared and severe complications of modern cataract surgery: lens material may drop through iatrogenic breaks in the posterior capsule. The incidence of nucleus drop as a complication of phacoemulsification increases concomitantly with the increasing frequency of phacoemulsification. Pars plana vitrectomy (PPV) followed by endofragmentation and secondary intraocular lens (IOL) implantation is the management procedure of choice. This case report aims to present the outcome of PPV for the treatment of a dropped nucleus after cataract surgery. METHODS: A 65-year-old female patient came to the vitreoretina department with a chief complaint of blurry vision in her left eye after phacoemulsification one month before. Ophthalmological examination revealed that visual acuity of the right eye (VA RE) was 6/15 and of the left eye (VA LE) was hand movement. The intraocular pressure (IOP) was 18 mmHg in the right eye and 59 mmHg in the left eye. The left eye was aphakic, with a dropped lens nucleus and secondary glaucoma. RESULTS: The patient received an antiglaucoma agent until her IOP decreased. She underwent pars plana vitrectomy to remove the dropped nucleus and received an iris-fixated IOL. Evaluation one week post-operatively revealed that VA LE was 6/7.5 and the iris-fixated IOL was in the proper position. CONCLUSIONS: Nucleus drop generally occurs in phacoemulsification cataract surgery. A retained lens nucleus or fragments in the vitreous may cause severe intraocular inflammation leading to secondary glaucoma. Proper management of retained lens fragments after nucleus drop gives an excellent outcome for the patient.

Keywords: secondary glaucoma, complication of phacoemulsification, nucleus drop, pars plana vitrectomy

Procedia PDF Downloads 63
1146 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks

Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang

Abstract:

The capacity of conventional cellular networks has reached its upper bound, and this can be addressed by introducing low-cost, easy-to-deploy femtocells. Spectrum interference becomes more critical as value-added multimedia services grow in two-tier cellular networks, and spectrum allocation is one of the most effective interference mitigation techniques. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game in which the femto base stations are players and the available frequency channels are strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects interference from the standpoint of channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function of the distance weight ratio, aimed at suppressing co-channel interference within the same network layer. This scenario is well suited to actual network deployment, and the system possesses high robustness. Under the proposed mechanism, interference exists only when players employ the same channel for data communication, and spectrum allocation is implemented in a distributed fashion. Numerical results show that the signal-to-interference-and-noise ratio can be noticeably improved by the spectrum allocation scheme and that users' downlink quality of service can be satisfied. Moreover, simulation results show that the average spectrum efficiency of the cellular network is significantly improved.
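The game described above can be sketched in a few lines of code. This is an illustrative toy model, not the authors' implementation: station positions, the distance normalization, and all numeric constants are assumptions, and the negative-logarithm penalty is one plausible reading of the "distance weight ratio" function. Because the pairwise penalty is symmetric, best-response dynamics converge to a Nash equilibrium of the channel-selection game.

```python
import math
import random

# Hypothetical sketch of the non-cooperative channel-selection game:
# femto base stations are players, channels are strategies, and co-channel
# interference is modeled as a negative logarithm of a distance ratio.
random.seed(0)
N_STATIONS, N_CHANNELS, D_MAX = 6, 3, 100.0
pos = [(random.uniform(0, D_MAX), random.uniform(0, D_MAX))
       for _ in range(N_STATIONS)]

def dist(i, j):
    return math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1])

def utility(i, channel, choices):
    # Interference exists only between stations sharing a channel; the
    # -log(d / d_max) term grows as co-channel stations get closer.
    penalty = sum(-math.log(dist(i, j) / (math.sqrt(2) * D_MAX))
                  for j in range(N_STATIONS)
                  if j != i and choices[j] == channel)
    return -penalty

choices = [random.randrange(N_CHANNELS) for _ in range(N_STATIONS)]
changed = True
while changed:  # best-response dynamics until no station can improve
    changed = False
    for i in range(N_STATIONS):
        best = max(range(N_CHANNELS), key=lambda c: utility(i, c, choices))
        if utility(i, best, choices) > utility(i, choices[i], choices) + 1e-9:
            choices[i], changed = best, True

print(choices)  # one channel index per femto base station
```

Each station updates in turn, seeing only the current channel choices of the others, which matches the distributed flavor of the proposed mechanism.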

Keywords: femtocell networks, game theory, interference mitigation, spectrum allocation

Procedia PDF Downloads 143
1145 Structural Model on Organizational Climate, Leadership Behavior and Organizational Commitment: Work Engagement of Private Secondary School Teachers in Davao City

Authors: Genevaive Melendres

Abstract:

School administrators face the reality of teachers losing their engagement, or of schools losing their teachers. This study was conducted to identify a structural model that best predicts the work engagement of private secondary teachers in Davao City. Ninety-three teachers from four sectarian schools and 56 teachers from four non-sectarian schools completed four survey instruments, namely the Organizational Climate Questionnaire, the Leader Behavior Descriptive Questionnaire, the Organizational Commitment Scales, and the Utrecht Work Engagement Scales. Data were analyzed using frequency distributions, means, standard deviations, t-tests for independent samples, Pearson's r, stepwise multiple regression analysis, and structural equation modeling. Results show that the schools have a high level of organizational climate dimensions; leaders often show both work-oriented and people-oriented behavior; and teachers have high normative commitment and are very often engaged in their work. Teachers from non-sectarian schools have higher organizational commitment than those from sectarian schools. Organizational climate and leadership behavior are positively related to, and predict, work engagement, whereas commitment showed no relationship. This study underscores the relative effects of the three variables on the work engagement of teachers. After testing a network of relationships and evaluating several models, a best-fitting model was found between leadership behavior and work engagement. These findings suggest that principals should pay attention to and consistently evaluate their behavior, as it best predicts the work engagement of their teachers. The study provides value to administrators who make decisions and create the conditions from which teachers derive fulfillment.

Keywords: leadership behavior, organizational climate, organizational commitment, private secondary school teachers, structural model on work engagement

Procedia PDF Downloads 247
1144 Life Cycle Assessment Applied to Supermarket Refrigeration System: Effects of Location and Choice of Architecture

Authors: Yasmine Salehy, Yann Leroy, Francois Cluzel, Hong-Minh Hoang, Laurence Fournaison, Anthony Delahaye, Bernard Yannou

Abstract:

Taking the whole life cycle of a product into consideration is now an important step in the eco-design of a product or technology. Life cycle assessment (LCA) is a standard tool for evaluating the environmental impacts of a system or process. Despite improvements in refrigerant regulation through protocols, the environmental damage caused by refrigeration systems remains significant and needs to be reduced. In this paper, the environmental impacts of the refrigeration system of a typical supermarket are compared under different conditions using the LCA methodology. The system provides cold at two temperature levels, medium and low, over a 15-year service life. The most commonly used architectures for supermarket cold production are investigated: centralized direct expansion systems and indirect systems using a secondary loop to transport the cold. Variations in the power required across seasons and across the supermarket's daily opening and closing periods are considered. R134a is taken as the primary refrigerant, and two types of secondary fluid are considered. The composition of each system and the leakage rate of the refrigerant over its life cycle are taken from the literature and from industrial data. Twelve scenarios are examined, based on the variation of three parameters: 1. location: France (Paris), Spain (Toledo), and Sweden (Stockholm); 2. source of electricity: photovoltaic panels or the low-voltage electric grid; and 3. architecture: direct or indirect refrigeration systems. The OpenLCA and SimaPro software packages and different impact assessment methods were compared; the CML method is used to evaluate the midpoint environmental indicators. This study highlights the significant contribution of electricity consumption to environmental damage compared to the impact of refrigerant leakage. The secondary loop lowers the refrigerant charge in the primary loop, which decreases the climate change indicators compared to centralized direct systems. However, an exhaustive cost evaluation (CAPEX and OPEX) of both systems shows higher costs for the indirect systems. A significant difference between the countries was observed, mostly due to differences in electricity production. In Spain, using photovoltaic panels efficiently reduces both the environmental impacts and the related costs; this scenario is the best alternative among those compared. Sweden is the country with the lowest environmental impacts. For both France and Sweden, the use of photovoltaic panels does not bring a significant difference, owing to lower sunlight exposure than in Spain. Alternative solutions exist to reduce the impact of refrigeration systems, and a brief introduction to them is presented.
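The comparison of electricity use against refrigerant leakage can be illustrated with back-of-the-envelope arithmetic. All numbers below are rough illustrative assumptions, not values from the study: the energy demand, refrigerant charge, leakage rate, and grid emission factors are placeholders, and only the R134a GWP of 1430 kg CO2-eq/kg is a standard (IPCC AR4) figure.

```python
# Hedged sketch of the climate-change comparison over a 15-year life:
# lifetime impact split into an electricity term and a leakage term.
LIFETIME_YEARS = 15
ANNUAL_ELECTRICITY_KWH = 400_000   # assumed supermarket refrigeration demand
REFRIGERANT_CHARGE_KG = 150        # assumed R134a charge, direct system
ANNUAL_LEAK_FRACTION = 0.10        # assumed leakage rate
GWP_R134A = 1430                   # kg CO2-eq per kg R134a (IPCC AR4)

grid_factor = {                    # kg CO2-eq per kWh, rough country averages
    "France": 0.06, "Spain": 0.25, "Sweden": 0.02,
}

shares = {}
for country, f in grid_factor.items():
    electricity = LIFETIME_YEARS * ANNUAL_ELECTRICITY_KWH * f
    leakage = LIFETIME_YEARS * REFRIGERANT_CHARGE_KG * ANNUAL_LEAK_FRACTION * GWP_R134A
    shares[country] = electricity / (electricity + leakage)
    print(f"{country}: {shares[country]:.0%} of climate impact from electricity")
```

Even this crude split reproduces the qualitative pattern the abstract reports: the electricity term dominates where the grid is carbon-intensive (Spain) and shrinks where electricity is low-carbon (Sweden), which is why the location and electricity source drive the differences between scenarios.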

Keywords: eco-design, industrial engineering, LCA, refrigeration system

Procedia PDF Downloads 165