Search results for: real time kinematics
351 Development of Adaptive Proportional-Integral-Derivative Feeding Mechanism for Robotic Additive Manufacturing System
Authors: Andy Alubaidy
Abstract:
In this work, a robotic additive manufacturing system (RAMS) that is capable of three-dimensional (3D) printing in six degrees of freedom (DOF) with very high accuracy and on virtually any surface has been designed and built. One of the major shortcomings of existing 3D printer technology is the limitation to three DOF, which results in prolonged fabrication time. Depending on the techniques used, it usually takes at least two hours to print small objects and several hours for larger objects. Another drawback is the size of the printed objects, which is constrained by the physical dimensions of most low-cost 3D printers, which are typically small. In such cases, large objects are produced by dividing them into smaller components that fit the printer’s workable area. They are then glued, bonded or otherwise attached to create the required object. Another shortcoming is material constraints and the need to fabricate a single part using different materials. With the flexibility of a six-DOF robot, the RAMS has been designed to overcome these problems. A feeding mechanism using an adaptive Proportional-Integral-Derivative (PID) controller is utilized along with a National Instruments CompactRIO (NI cRIO), an ABB robot, and off-the-shelf sensors. The RAMS has the ability to 3D print virtually anywhere in six degrees of freedom with very high accuracy. It is equipped with an ABB IRB 120 robot to achieve this level of accuracy. In order to convert computer-aided design (CAD) files into a digital format that is acceptable to the robot, Hypertherm Robotic Software Inc.’s state-of-the-art slicing software called “ADDMAN” is used. ADDMAN is capable of converting any CAD file into RAPID code (the programming language for ABB robots). The robot uses the generated code to perform the 3D printing. To control the entire process, a National Instruments (NI) CompactRIO (cRIO-9074) is connected to, and communicates with, the robot and a purpose-designed and fabricated feeding mechanism. The feeding mechanism consists of two major parts, the cold-end and the hot-end. The cold-end consists of what is conventionally known as an extruder. Typically, a stepper motor is used to control the push on the material; however, for optimum control, a DC motor is used instead. The hot-end consists of a melt-zone, nozzle, and heat-break. The melt-zone ensures a thorough melting effect and consistent output from the nozzle. Nozzles are made of brass for thermal conductivity, while the melt-zone comprises a heating block and a ceramic heating cartridge to transfer heat to the block. The heat-break ensures that there is no heat creep-up effect, as this would swell the material and prevent consistent extrusion. A control system embedded in the cRIO is developed using NI LabVIEW, which utilizes an adaptive PID algorithm to govern the heating cartridge in conjunction with a thermistor. The thermistor sends temperature feedback to the cRIO, which increases or decreases the heating based on the system output. Since different materials have different melting points, our system allows the temperature to be adjusted and the material to be varied.
Keywords: robotic, additive manufacturing, PID controller, cRIO, 3D printing
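As a rough illustration of the temperature-control loop described above, the following Python sketch mimics an adaptive PID governing a heating cartridge from thermistor feedback. The actual controller runs in NI LabVIEW on the cRIO; the gain-scheduling rule, the first-order hot-end model, and all numerical values below are illustrative assumptions rather than parameters taken from the RAMS.

```python
# Minimal sketch of the adaptive PID temperature loop described above.
# The real controller runs in NI LabVIEW on the cRIO; the plant model,
# gain-scheduling rule, and all numbers here are illustrative assumptions.

class AdaptivePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        # Simple adaptation rule: stiffen the proportional gain far from the
        # setpoint, soften it close to the setpoint.
        scale = 1.5 if abs(error) > 20.0 else 0.7
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (scale * self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

def hot_end_model(temp, heater_power, dt, ambient=25.0):
    """First-order thermal model of the heating block (hypothetical constants)."""
    gain, time_constant = 2.0, 60.0      # deg C per unit power, seconds
    dT = (gain * heater_power - (temp - ambient)) / time_constant
    return temp + dT * dt

if __name__ == "__main__":
    pid = AdaptivePID(kp=1.2, ki=0.05, kd=0.4, dt=0.5)
    temp, setpoint = 25.0, 200.0         # e.g. a PLA-like print temperature
    for step in range(600):              # 5 minutes of simulated control
        power = max(0.0, min(pid.update(setpoint, temp), 100.0))
        temp = hot_end_model(temp, power, dt=0.5)
    print(f"Temperature after 5 min: {temp:.1f} C")
```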
Procedia PDF Downloads 218
350 Tailoring Structural, Thermal and Luminescent Properties of Solid-State MIL-53(Al) MOF via Fe³⁺ Cation Exchange
Authors: T. Ul Rehman, S. Agnello, F. M. Gelardi, M. M. Calvino, G. Lazzara, G. Buscarino, M. Cannas
Abstract:
Metal-Organic Frameworks (MOFs) have emerged as promising candidates for detecting metal ions owing to their large surface area, customizable porosity, and diverse functionalities. In recent years, there has been a surge in research focused on MOFs with luminescent properties. These frameworks are constructed through coordinated bonding between metal ions and multi-dentate ligands, resulting in inherently fluorescent structures. Their luminescent behavior is influenced by factors like structural composition, surface morphology, pore volume, and interactions with target analytes, particularly metal ions. MOFs exhibit various sensing mechanisms, including photo-induced electron transfer (PET) and charge transfer processes such as ligand-to-metal (LMCT) and metal-to-ligand (MLCT) transitions. Among these, MIL-53(Al) stands out due to its flexibility, stability, and specific affinity towards certain metal ions, making it a promising platform for selective metal ion sensing. This study investigates the structural, thermal, and luminescent properties of the MIL-53(Al) metal-organic framework (MOF) upon Fe³⁺ cation exchange. Two separate sets of samples were prepared by activating the MOF powder at different temperatures. The first set of samples, referred to as MIL-53(Al) activated (120°C), was prepared by activating the raw powder in a glass tube at 120°C for 12 hours and then sealing it. The second set of samples, referred to as MIL-53(Al) activated (300°C), was prepared by activating the MIL-53(Al) powder in a glass tube at 300°C for 70 hours. Additionally, 25 mg of MIL-53(Al) powder was dispersed in 5 mL of Fe³⁺ solution at various concentrations (0.1-100 mM) for the cation exchange experiment. The suspension was centrifuged for five minutes at 10,000 rpm to extract the MIL-53(Al) powder. After three rounds of washing with ultrapure water, the MIL-53(Al) powder was heated at 120°C for 12 hours. A sample of the obtained MIL-53(Al) was used for the PXRD and TGA analyses. We also activated the cation-exchanged samples for time-resolved photoluminescence (TRPL) measurements at two distinct temperatures (120 and 300°C) for comparative analysis. Powder X-ray diffraction patterns reveal amorphization in samples with higher Fe³⁺ concentrations, attributed to alterations in coordination environments and ion exchange dynamics. Thermal decomposition analysis shows reduced weight loss in Fe³⁺-exchanged MOFs, indicating enhanced stability due to stronger metal-ligand bonds and altered decomposition pathways. Raman spectroscopy demonstrates intensity decrease, shape disruption, and frequency shifts, indicative of structural perturbations induced by cation exchange. Photoluminescence spectra exhibit ligand-based emission (π-π* or n-π*) and ligand-to-metal charge transfer (LMCT), influenced by activation temperature and Fe³⁺ incorporation. Quenching of luminescence intensity and shorter lifetimes upon Fe³⁺ exchange result from structural distortions and Fe³⁺ binding to the organic linkers. In a nutshell, this research underscores the complex interplay between composition, structure, and properties in MOFs, offering insights into their potential for diverse applications in catalysis, gas storage, and luminescent devices.
Keywords: Fe³⁺ cation exchange, luminescent metal-organic frameworks (LMOFs), MIL-53(Al), solid-state analysis
Procedia PDF Downloads 66
349 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers
Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala
Abstract:
The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Instead of presenting plain information, classifying different aspects of browsing like Bookmarks, History, and Download Manager entries into useful categories would improve and enhance the user experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources. Such solutions have security constraints and may miss contextual data during classification. On-device classification solves many of these problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and providing better privacy/security. This approach provides more relevant results compared to current standalone solutions because it uses the content rendered by the browser, which is customized by the content provider based on the user’s profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser’s rendering engine. This DOM data is dynamic, contextual and secure, and cannot be replicated. The proposal extracts different features of the webpage and runs them through an algorithm that classifies the page into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms like Support Vector Machines, Neural Networks, etc. Naive Bayes classification requires a small memory footprint and little computation, making it suitable for the smartphone environment. The solution has a feature to partition the model into multiple chunks, which in turn reduces memory usage because a complete model need not be loaded. Classification of webpages done through the integrated engine is faster, more relevant and more energy efficient than other standalone on-device solutions. This classification engine has been tested on Samsung Z3 Tizen hardware. The engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. The cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution has resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with the standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. The engine can be further extended for suggesting dynamic tags and for using the classification in different use cases to enhance the browsing experience.
Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification
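To make the classification step concrete, the sketch below shows a minimal multinomial Naive Bayes classifier of the kind the abstract describes, using the eight categories listed above. In the real engine the features come from the browser's DOM tree and the model is trained on the cleaned dmoztools.net dataset and partitioned into chunks; the whitespace tokenizer and the tiny toy training corpus here are illustrative assumptions only.

```python
# Minimal sketch of the multinomial Naive Bayes idea behind the on-device
# webpage classifier. Real features come from the browser's DOM tree; the
# toy corpus, tokenizer and training samples below are illustrative assumptions.
import math
from collections import defaultdict, Counter

CATEGORIES = ["education", "games", "health", "entertainment",
              "news", "shopping", "sports", "travel"]

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.class_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()

    def train(self, samples):                      # samples: (text, label)
        for text, label in samples:
            self.class_counts[label] += 1
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)

    def classify(self, text):
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, -math.inf
        for label in self.class_counts:
            # log prior + sum of log likelihoods with Laplace smoothing
            score = math.log(self.class_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokenize(text):
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

if __name__ == "__main__":
    nb = NaiveBayes()
    nb.train([("final score league match goal", "sports"),
              ("cheap flights hotel booking beach", "travel"),
              ("breaking headlines election report", "news")])
    print(nb.classify("match report and final score"))   # -> sports
```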
Procedia PDF Downloads 164
348 Feasibility of Washing/Extraction Treatment for the Remediation of Deep-Sea Mining Tailings
Authors: Kyoungrean Kim
Abstract:
The importance of deep-sea mineral resources is dramatically increasing due to the depletion of land mineral resources corresponding to increasing human economic activities. Korea has acquired exclusive exploration licenses in four areas: the Clarion-Clipperton Fracture Zone in the Pacific Ocean (2002), Tonga (2008), Fiji (2011) and the Indian Ocean (2014). Preparations for commercial mining by Nautilus Minerals (Canada) and Lockheed Martin Minerals (USA) are expected by 2020. The London Protocol 1996 (LP) under the International Maritime Organization (IMO) and the International Seabed Authority (ISA) will set environmental guidelines for deep-sea mining until 2020, to protect the marine environment. In this research, the applicability of washing/extraction treatment for the remediation of deep-sea mining tailings was evaluated in order to present preliminary data for developing practical remediation technology in the near future. Polymetallic nodule samples were collected at the Clarion-Clipperton Fracture Zone in the Pacific Ocean and then stored at room temperature. Samples were pulverized using a jaw crusher and a ball mill and then classified into three particle sizes (> 63 µm, 63-20 µm, < 20 µm) using vibratory sieve shakers (Analysette 3 Pro, Fritsch, Germany) with 63 µm and 20 µm sieves. Only the 63-20 µm particle-size fraction was used for the investigation, considering the lower limit of the ore dressing process, which is tens of µm to 100 µm. Rhamnolipid and sodium alginate, as biosurfactants, and aluminum sulfate, which is mainly used as a flocculant, were used as environmentally friendly additives. Samples were adjusted to a 2% suspension with deionized water and then mixed with various concentrations of additives. The mixture was stirred with a magnetic bar for specific reaction times, and then the liquid phase was separated in a centrifugal separator (Thermo Fisher Scientific, USA) at 4,000 rpm for 1 h. The separated liquid was filtered with a syringe and an acrylic-based filter (0.45 µm). The extracted heavy metals in the filtered liquid were then determined using a UV-Vis spectrometer (DR-5000, Hach, USA) and a heat block (DBR 200, Hach, USA) following US EPA methods (8506, 8009, 10217 and 10220). The polymetallic nodules were mainly composed of manganese (27%), iron (8%), nickel (1.4%), copper (1.3%), cobalt (1.3%) and molybdenum (0.04%). Based on the remediation standards of various countries, nickel (Ni), copper (Cu), cadmium (Cd) and zinc (Zn) were selected as primary target materials. Throughout this research, the use of rhamnolipid was shown to be an effective approach for removing heavy metals from samples originating from manganese nodules. Sodium alginate might also be an effective additive for the remediation of deep-sea mining tailings such as polymetallic nodules. Compared to rhamnolipid and sodium alginate, aluminum sulfate was a more effective additive at short reaction times, within 4 h. Based on these results, sequential particle separation, selective extraction/washing, advanced filtration of the liquid phase, water treatment without dewatering, and solidification/stabilization may be considered candidate technologies for the remediation of deep-sea mining tailings.
Keywords: deep-sea mining tailings, heavy metals, remediation, extraction, additives
Procedia PDF Downloads 155
347 Numerical Analysis of the Computational Fluid Dynamics of Co-Digestion in a Large-Scale Continuous Stirred Tank Reactor
Authors: Sylvana A. Vega, Cesar E. Huilinir, Carlos J. Gonzalez
Abstract:
Co-digestion in anaerobic biodigesters is a technology that improves hydrolysis and increases methane generation. In the present study, the three-dimensional computational fluid dynamics (CFD) of agitation in a full-scale Continuous Stirred Tank Reactor (CSTR) biodigester during the co-digestion process is numerically analyzed using Ansys Fluent software. For this, a rheological study of the substrate is carried out, establishing stirrer rotation speeds depending on the microbial activity and energy ranges. The substrate is organic waste from industrial sources: sanitary water and butcher, fishmonger, and dairy waste. The rheological behavior curves show that the substrate is a non-Newtonian fluid of the pseudoplastic type, with a solids content of 12%. The simulation considers the rheological results for the fluid and models the full-scale CSTR biodigester, coupling the continuity and three-dimensional Navier-Stokes differential equations, the power-law model for non-Newtonian fluids, and three turbulence models: k-ε RNG, k-ε Realizable, and RSM (Reynolds Stress Model), for a 45° pitched-blade impeller. The simulation covers three minutes, since the aim is to study intermittent mixing and its benefit in terms of energy savings. The results show that the absolute errors of the power number associated with the k-ε RNG, k-ε Realizable, and RSM models were 7.62%, 1.85%, and 5.05%, respectively, relative to the power numbers obtained from the analytical-experimental equation of Nagata. The results for the generalized Reynolds number show that the fluid dynamics lie in a transition-turbulent flow regime. Concerning the Froude number, the result indicates there is no need to implement baffles in the biodigester design, and the power number shows a steady trend close to 1.5. It is observed that the design velocities within the biodigester are approximately 0.1 m/s, which are suitable for the microbial community, allowing it to coexist and feed on the substrate in co-digestion. It is concluded that the model that most accurately predicts the fluid dynamics within the reactor is the k-ε Realizable model. The flow paths obtained are consistent with the referenced literature, where the 45° pitched-blade turbine (PBT) impeller is the right type of agitator to keep particles in suspension and, in turn, increase the dispersion of gas in the liquid phase. If 24/7 complete mixing under stirred agitation is considered, with a plant factor of 80%, an energy consumption of 51,840 kWh/year is estimated. In contrast, intermittent agitation of 3 min every 15 min under the same design conditions reduces energy costs by almost 80%. This is a feasible approach for predicting the energy expenditure of an anaerobic CSTR biodigester. It is recommended to use high mixing intensities at the beginning and end of the joint acetogenesis/methanogenesis phase. High-intensity mixing at the beginning activates the bacteria, and a further increase in agitation towards the end of the hydraulic retention time favors the final dispersion of biogas that may be trapped at the bottom of the biodigester.
Keywords: anaerobic co-digestion, computational fluid dynamics, CFD, net power, organic waste
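Two of the quantities mentioned above lend themselves to a short worked sketch: the generalized (Metzner-Otto) Reynolds number for a power-law substrate and the energy comparison between continuous and intermittent agitation. The annual-energy figures follow the abstract (51,840 kWh/year at an 80% plant factor; 3 min of mixing every 15 min), while the density, consistency index, flow index, impeller diameter and rotation speed are illustrative assumptions, not values reported by the study.

```python
# Sketch of two calculations behind the abstract: the generalized (Metzner-Otto)
# Reynolds number for a power-law substrate and the energy saving from
# intermittent agitation. Rheological constants, impeller size and speed are
# illustrative assumptions; the annual-energy figures follow the abstract.

# --- Generalized Reynolds number for a power-law (pseudoplastic) fluid ---
rho = 1000.0      # substrate density, kg/m^3 (assumed)
K = 0.8           # consistency index, Pa.s^n (assumed)
n = 0.45          # flow behaviour index < 1 -> pseudoplastic (assumed)
D = 1.5           # impeller diameter, m (assumed)
N = 0.5           # rotation speed, rev/s (assumed)
k_s = 11.0        # Metzner-Otto constant, typical for a PBT impeller

mu_app = K * (k_s * N) ** (n - 1.0)          # apparent viscosity, Pa.s
Re_g = rho * N * D ** 2 / mu_app             # generalized Reynolds number
print(f"Generalized Reynolds number: {Re_g:,.0f}")

# --- Energy comparison: continuous vs intermittent agitation ---
annual_continuous_kwh = 51_840               # 24/7 mixing, 80% plant factor (abstract)
plant_factor = 0.80
agitator_power_kw = annual_continuous_kwh / (plant_factor * 8760.0)
duty_cycle = 3.0 / 15.0                      # 3 min of mixing every 15 min
annual_intermittent_kwh = annual_continuous_kwh * duty_cycle
saving = 1.0 - annual_intermittent_kwh / annual_continuous_kwh
print(f"Implied agitator power: {agitator_power_kw:.1f} kW")
print(f"Intermittent energy use: {annual_intermittent_kwh:,.0f} kWh/year "
      f"({saving:.0%} saving)")
```

With the assumed rheology this returns a Reynolds number of a few thousand, consistent with the transition-turbulent regime reported above, and an energy saving of 80% for the 3-min-per-15-min schedule.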
Procedia PDF Downloads 115
346 Flexible Ethylene-Propylene Copolymer Nanofibers Decorated with Ag Nanoparticles as Effective 3D Surface-Enhanced Raman Scattering Substrates
Authors: Yi Li, Rui Lu, Lianjun Wang
Abstract:
With the rapid development of the chemical industry, the consumption of volatile organic compounds (VOCs) has increased extensively. In the process of VOC production and application, plenty of them have been transferred to the environment. As a result, this has led to pollution problems not only in soil and groundwater but also for human beings. Thus, it is important to develop a sensitive and cost-effective analytical method for trace VOC detection in the environment. Surface-enhanced Raman spectroscopy (SERS), as one of the most sensitive optical analytical techniques, with rapid response, pinpoint accuracy and noninvasive detection, has been widely used for ultratrace analysis. Based on plasmon resonance on the nanoscale metallic surface, SERS technology can even detect single molecules due to abundant nanogaps (i.e., 'hot spots') on the nanosubstrate. In this work, self-supported flexible silver nitrate (AgNO3)/ethylene-propylene copolymer (EPM) hybrid nanofibers were fabricated by electrospinning. After an in-situ chemical reduction using ice-cold sodium borohydride as the reducing agent, numerous silver nanoparticles were formed on the nanofiber surface. By adjusting the reduction time and AgNO3 content, the morphology and dimensions of the silver nanoparticles could be controlled. According to the principles of solid-phase extraction, hydrophobic substances are more likely to partition into the hydrophobic EPM membrane in an aqueous environment, while water and other polar components are excluded from the analytes. Through this enrichment by the EPM fibers, the number of hydrophobic molecules located on the 'hot spots' generated by the criss-crossed nanofibers is greatly increased, which further enhances the SERS signal intensity. The as-prepared Ag/EPM hybrid nanofibers were first employed to detect a common SERS probe molecule (p-aminothiophenol), with a detection limit down to 10⁻¹² M, demonstrating excellent SERS performance. To further study the application of the fabricated substrate for monitoring hydrophobic substances in water, several typical VOCs, such as benzene, toluene and p-xylene, were selected as model compounds. The results showed that the characteristic peaks of these target analytes in the mixed aqueous solution could be distinguished even at a concentration of 10⁻⁶ M after a multi-peak Gaussian fitting process, including C-H bending (850 cm⁻¹) and C-C ring stretching (1581 cm⁻¹, 1600 cm⁻¹) of benzene; C-H bending (844 cm⁻¹, 1151 cm⁻¹), C-C ring stretching (1001 cm⁻¹) and CH3 bending vibration (1377 cm⁻¹) of toluene; and C-H bending (829 cm⁻¹) and C-C stretching (1614 cm⁻¹) of p-xylene. The SERS substrate has remarkable advantages, combining the enrichment capacity of EPM with the Raman enhancement of the Ag nanoparticles. Meanwhile, the huge specific surface area resulting from electrospinning is beneficial for increasing the number of adsorption sites and promotes 'hot spot' formation. In summary, this work shows powerful potential for the rapid, on-site and accurate detection of trace VOCs using a portable Raman spectrometer.
Keywords: electrospinning, ethylene-propylene copolymer, silver nanoparticles, SERS, VOCs
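The multi-peak Gaussian fitting used above to separate overlapping bands can be sketched as follows. The synthetic two-band "spectrum" stands in for measured SERS data; the band positions loosely follow the toluene bands quoted in the abstract, and all amplitudes, widths and noise levels are illustrative assumptions.

```python
# Minimal sketch of the multi-peak Gaussian fitting used to resolve overlapping
# SERS bands. The synthetic two-peak "spectrum" below stands in for measured
# data; peak positions loosely follow the toluene bands quoted in the abstract,
# and all amplitudes/widths are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    return amp * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)

# Synthetic Raman-shift axis and noisy spectrum with bands near 1001 and 1030 cm^-1
shift = np.linspace(950, 1080, 400)
rng = np.random.default_rng(0)
spectrum = (two_gaussians(shift, 120, 1001, 6, 60, 1030, 8)
            + rng.normal(0, 3, shift.size))

# Fit both bands simultaneously from rough initial guesses
p0 = [100, 1000, 5, 50, 1035, 5]
popt, _ = curve_fit(two_gaussians, shift, spectrum, p0=p0)
a1, c1, w1, a2, c2, w2 = popt
print(f"Band 1: center {c1:.1f} cm^-1, height {a1:.1f}")
print(f"Band 2: center {c2:.1f} cm^-1, height {a2:.1f}")
```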
Procedia PDF Downloads 161
345 Conservation Challenges of Fish and Fisheries in Lake Tana, Ethiopia
Authors: Shewit Kidane, Abebe Getahun, Wassie Anteneh, Admassu Demeke, Peter Goethals
Abstract:
We have reviewed the major findings of scientific studies on Lake Tana's fish resources and their threats. The aim was to provide summarized information for all concerned bodies and international readers to get a full and comprehensive picture of the lake's fish resources and conservation problems. The Lake Tana watershed comprises 28 fish species, of which 21 are endemic. Moreover, Lake Tana is among the top 250 lake regions of global importance for biodiversity and is a world-recognized wintering site for migratory birds. Lake Tana, together with its adjacent wetlands, provides directly and indirectly a livelihood for more than 500,000 people. However, owing to anthropogenic activities, the lake ecosystem as well as the fish and the attributes of the fisheries sector are severely degraded. Fish species in Lake Tana are suffering due to illegal fishing, damming, habitat/breeding ground degradation, wastewater disposal, the introduction of exotic species, and a lack of implementation of fisheries regulations. Currently, more than 98% of fishers in Lake Tana are using the most destructive monofilament nets. Indeed, dams, irrigation schemes and hydropower are constructed in response to emerging development needs only. Mitigation techniques such as the construction of fish ladders for the migratory fishes are mostly forgotten. In addition, water resource developers are likely unaware of both the importance of the fisheries and the impact of dam construction on fish. As a result, the biodiversity issue is often missed. Besides, the Lake Tana wetlands, which play a vital role in sustaining biodiversity, are not wisely utilised in the sense of the Ramsar Convention's definition. Wetlands are considered unhealthy, and hence wetland conversion for the purpose of recession agriculture is still seen as an advanced mode of development. As a result, many wetlands in the lake watershed are shrinking drastically over time, and Cyperus papyrus, one of the characteristic features of Lake Tana, has dramatically declined in its distribution, with some local extinctions. Furthermore, the recently introduced water hyacinth (Eichhornia crassipes) is creating immense problems for the lake ecosystem. Moreover, currently, 1.56 million tons of sediment are deposited into the lake each year, and wastes from industries and residents are directly discharged into the lake without treatment. Recently, signs of eutrophication have been revealed in Lake Tana and, of most concern, the incidence of the cyanobacteria genus Microcystis was reported from the Bahir Dar Gulf of Lake Tana. Thus, the direct dependency of the communities on the lake water for drinking as well as for washing their bodies and clothes, and on its fisheries, makes the problem worse. Indeed, since the lake is home to many endemic migratory fish, such unregulated developmental activities could be detrimental to their stocks. This can be best illustrated by the drastic stock reduction (>75% in biomass) of the globally unique Labeobarbus species. So, unless proper management is put in place, the anthropogenic impacts can jeopardize the aquatic ecosystems. Therefore, in order to sustainably use the aquatic resources and fulfil the needs of the local people, every developmental activity and resource utilization should be carried out adhering to the available policies.
Keywords: anthropogenic impacts, dams, endemic fish, wetland degradation
Procedia PDF Downloads 252
344 Creation of a Test Machine for the Scientific Investigation of Chain Shot
Authors: Mark McGuire, Eric Shannon, John Parmigiani
Abstract:
Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can result in a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It has speeds similar to a bullet but typically has greater mass and is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot to occur under carefully controlled conditions and to accurately measure the response. Worldwide, few such test machines exist. Those that do focus on validating the ability of barriers to withstand a chain shot impact rather than obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. This machine can test all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, and can accurately measure the corresponding key technical parameters. The test machine was constructed inside a standard shipping container. This provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and capable of being located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated following ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. Measurement output shows the fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and consequently improved products and greater harvester operator safety.
Keywords: chain shot, safety, testing, timber harvesters
Procedia PDF Downloads 152
343 Songwriting in the Postdigital Age: Using TikTok and Instagram as Online Informal Learning Technologies
Authors: Matthias Haenisch, Marc Godau, Julia Barreiro, Dominik Maxelon
Abstract:
In times of ubiquitous digitalization and the increasing entanglement of humans and technologies in musical practices in the 21st century, the question arises of how popular musicians learn in the (post)digital age. Against the backdrop of the increasing interest in transferring informal learning practices into formal settings of music education, the interdisciplinary research association »MusCoDA – Musical Communities in the (Post)Digital Age« (University of Erfurt/University of Applied Sciences Clara Hoffbauer Potsdam), funded by the German Ministry of Education and Research, pursues the goal of deriving an empirical model of collective songwriting practices from the study of the informal learning of songwriters and bands that can be translated into pedagogical concepts for music education in schools. Drawing on concepts from Communities of Musical Practice and Actor-Network Theory, learning is considered not only as social practice and as participation in online and offline communities, but also as an effect of heterogeneous networks composed of human and non-human actors. Learning is not seen as an individual, cognitive process, but as the formation and transformation of actor networks, i.e., as a practice of assembling and mediating humans and technologies. Based on video-stimulated recall interviews and videography of online and offline activities, songwriting practices are followed from the initial idea to different forms of performance and distribution. The data evaluation combines coding and mapping methods of Grounded Theory Methodology and Situational Analysis. This results in network maps in which both the temporality of creative practices and the material and spatial relations of human and technological actors are reconstructed. In addition, positional analyses document the power relations between the participants that structure the learning process of the field. In the area of online informal learning, initial key research findings reveal a transformation of the learning subject through the specific technological affordances of TikTok and Instagram and the accompanying changes in the learning practices of the corresponding online communities. Learning is explicitly shaped by the material agency of online tools and features and the social practices entangled with these technologies. Thus, any human online community member can be invited to directly intervene in creative decisions that contribute to the further compositional and structural development of songs. At the same time, participants can provide each other with intimate insights into songwriting processes in progress and have the opportunity to perform together with strangers and idols. Online learning is characterized by an increase in social proximity, distribution of creative agency and informational exchange between participants. While it seems obvious that traditional notions not only of learning but also of the learning subject cannot be maintained, the question arises of how exactly the observed informal learning practices and the subject that emerges from the use of social media as online learning technologies can be transferred into contexts of formal learning.
Keywords: informal learning, postdigitality, songwriting, actor-network theory, community of musical practice, social media, TikTok, Instagram, apps
Procedia PDF Downloads 128
342 Health Advocacy in Medical School: An American Survey on Attitudes and Engagement in Clerkships
Authors: Rachel S. Chang, Samuel P. Massion, Alan Z. Grusky, Heather A. Ridinger
Abstract:
Introduction: Health advocacy is defined as activities that improve access to care, utilize resources, address health disparities, and influence health policy. Advocacy is increasingly being recognized as a critical component of a physician’s role, as understanding social determinants of health and improving patient care are important aspects within the American Medical Association’s Health Systems Science framework. However, despite this growing prominence, educational interventions that address advocacy topics are limited and variable across medical school curricula. Furthermore, few recent studies have evaluated attitudes toward health advocacy among physicians-in-training in the United States. This study examines medical student attitudes towards health advocacy, along with perceived knowledge, ability, and current level of engagement with health advocacy during their clerkships. Methods: This study employed a cross-sectional survey design using a single anonymous, self-report questionnaire administered to all second-year medical students at Vanderbilt University School of Medicine (n=96) in December 2020 during clerkship rotations. The survey had 27 items: 15 five-point Likert-scale items, 11 multiple-choice items, and 1 free-response question. Descriptive statistics and thematic analysis were utilized to analyze responses. The study was approved by the Vanderbilt University Institutional Review Board. Results: There was an 88% response rate among second-year clerkship medical students. A majority (83%) agreed that formal training in health advocacy should be a mandatory part of the medical student curriculum. Likewise, 83% of respondents felt that acting as a health advocate for patients should be part of their role as a clerkship student. However, a minority (25%) felt adequately prepared. While 72% of respondents felt able to identify a psychosocial need, 18% felt confident navigating the healthcare system and only 9% felt able to connect a patient to a psychosocial resource to fill that gap. 44% of respondents regularly contributed to conversations with their medical teams when discussing patients’ social needs, such as housing insecurity, financial insecurity, or legal needs. On average, respondents reported successfully connecting patients to psychosocial resources 1-2 times per 8-week clerkship block. Barriers to participating in health advocacy included perceived time constraints, lack of awareness of resources, lower emphasis among medical teams, and scarce involvement with social work teams. Conclusions: In this single-institution study, second-year medical students on clerkships recognize the importance of advocating for patients and support advocacy training within their medical school curriculum. However, their perceived lack of ability to navigate the healthcare system and connect patients to psychosocial resources results in students feeling unprepared to advocate as effectively as they had hoped during their clerkship rotations. Our results support the ongoing need to equip medical students with the training and resources necessary for them to act effectively as advocates for patients.
Keywords: clerkships, medical students, patient advocacy, social medicine
Procedia PDF Downloads 130
341 Rupture Termination of the 1950 C. E. Earthquake and Recurrence Interval of Great Earthquakes in North Eastern Himalaya, India
Authors: Rao Singh Priyanka, Jayangondaperumal R.
Abstract:
The Himalayan active faults have the potential to generate great earthquakes in the future, posing a major existential threat to humans in the Himalayan and adjacent regions. Quantitative evaluation of accumulated and released interseismic strain is crucial to assess the magnitude and spatio-temporal variability of future great earthquakes along the Himalayan arc. To mitigate the destruction and hazards associated with such earthquakes, it is important to understand their recurrence cycle. The eastern Himalayan and Indo-Burman plate boundary systems offer oblique convergence across two orthogonal plate boundaries, resulting in a zone of distributed deformation both within and away from the plate boundary and clockwise rotation of fault-bounded blocks. This seismically active region has a poorly documented historical archive of past large earthquakes. Thus, paleoseismological studies confirm the surface rupture evidence of great continental earthquakes (Mw ≥ 8) along the Himalayan Frontal Thrust (HFT), which, along with geodetic studies, collectively provide crucial information to understand and assess the seismic potential. These investigations reveal the rupture of three-fourths of the HFT during great events since medieval times, but with debatable opinions on the timing of the events due to unclear evidence, neglect of transverse segment boundaries, and a lack of detailed studies. Recent paleoseismological investigations in the eastern Himalaya and Mishmi ranges confirm the primary surface ruptures of the 1950 C.E. great earthquake (M>8). However, a seismic gap exists between the 1714 C.E. and 1950 C.E. Assam earthquakes that has not slipped since the 1697 C.E. event. Unlike the latest large blind 2015 Gorkha earthquake (Mw 7.8), the 1950 C.E. event was not triggered by a large event such as the 1947 C.E. earthquake that occurred near the western edge of the great upper Assam event. Moreover, the western segment of the eastern Himalaya has not witnessed any surface-breaking earthquake along the HFT over the past 300 years. The frontal fault excavations reveal that during the 1950 earthquake, a ~3.1-m-high scarp along the HFT was formed due to a co-seismic slip of 5.5 ± 0.7 m at Pasighat in the eastern Himalaya, while a 10-m-high scarp at Kamlang Nagar along the Mishmi Thrust in the Eastern Himalayan Syntaxis is the outcome of a dip-slip displacement of 24.6 ± 4.6 m along a 25 ± 5°E dipping fault. This event ruptured along two orthogonal fault systems with an oblique thrust-fault mechanism. Approximately 130 km west of the Pasighat site, the Himebasti village has witnessed two earthquakes, the historical 1697 Sadiya earthquake and the 1950 event, with a cumulative dip-slip displacement of 15.32 ± 4.69 m. At the Niglok site, Arunachal Pradesh, a cumulative slip of ~12.82 m during at least three events since pre-19585 B.P. has produced a ~6.2-m-high scarp, while the youngest scarp of ~2.4-m height was produced during the 1697 C.E. event. The site preserves two deformational events along the eastern HFT, providing an idea of the last serial ruptures at an interval of ~850 years, while successive surface-rupturing earthquakes are lacking in the Mishmi Range to estimate the recurrence cycle.
Keywords: paleoseismology, surface rupture, recurrence interval, Eastern Himalaya
Procedia PDF Downloads 84
340 Health Risk Assessment from Potable Water Containing Tritium and Heavy Metals
Authors: Olga A. Momot, Boris I. Synzynys, Alla A. Oudalova
Abstract:
Obninsk is situated in the Kaluga region, 100 km southwest of Moscow, on the left bank of the Protva River. Several enterprises utilizing nuclear energy operate in the town. In regions where radiation-hazardous facilities are located, special attention has traditionally been paid to radioactive gas and aerosol releases into the atmosphere, liquid waste discharges into the Protva river, and groundwater pollution. The municipal intakes involve 34 wells arranged over 15 km in a north-south sequence along the foot of the left slope of the Protva river valley. The northern and southern water intakes are upstream and downstream of the town, respectively. They belong to river valley intakes with mixed feeding, i.e., precipitation infiltration is responsible for a smaller part of the groundwater, while a greater amount is formed by overflow from the Protva. The water intakes are maintained by the Protva river runoff, the volume of which depends on the precipitation and the watershed area. Groundwater contamination with tritium was first detected in the sanitary-protective zone of the Institute of Physics and Power Engineering (SRC-IPPE) by Roshydromet researchers while implementing the “Program of radiological monitoring in the territory of nuclear industry enterprises”. A comprehensive survey of the SRC-IPPE’s industrial site and adjacent territories has revealed that research nuclear reactors and accelerators where tritium targets are applied, as well as radioactive waste storage facilities, could be considered potential sources of technogenic tritium. All the above sources are located within the sanitary controlled area of the intakes. Tritium activity in the water of springs and wells near the SRC-IPPE is about 17.4 – 3200 Bq/l. The observed values of tritium activity are below the intervention levels (7600 Bq/l for inorganic compounds and 3300 Bq/l for organically bound tritium). The risk has been assessed to estimate the possible effect of the considered tritium concentrations on human health. Data on tritium concentrations in pipe-line drinking water were used for the calculations. The activity of ³H amounted to 10.6 Bq/l and corresponded to a risk from such water consumption of ~3·10⁻⁷ year⁻¹. This risk value is close in magnitude to the individual annual death risk for a population living near an NPP – 1.6·10⁻⁸ year⁻¹ – and at the same time corresponds to the level of tolerable risk (10⁻⁶) and falls within “risk optimization”, i.e., the sphere for planning economically sound measures on exposure risk reduction. To estimate the chemical risk, physical and chemical analyses were made of the waters from all springs and wells near the SRC-IPPE. The chemical risk from groundwater contamination was estimated according to the US EPA guidance. The risk of carcinogenic disease from such drinking water consumption amounts to 5·10⁻⁵. According to the accepted classification, the health risk in the case of spring water consumption is inadmissible. The compared assessments of the risk associated with tritium exposure, on the one hand, and with the dangerous chemical (e.g. heavy metal) contamination of Obninsk drinking water, on the other hand, have confirmed that it is these chemical pollutants that are responsible for the health risk.
Keywords: radiation-hazardous facilities, water intakes, tritium, heavy metal, health risk
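For illustration only, the sketch below shows the generic form of the two risk calculations referred to above: a linear no-threshold radiological risk from the measured tritium activity and an EPA-style chemical carcinogenic risk. The consumption rate, dose coefficient, risk factor, metal concentration and slope factor are assumed, textbook-style values, so the outputs demonstrate the method rather than reproduce the paper's figures (~3·10⁻⁷ year⁻¹ and 5·10⁻⁵).

```python
# Illustrative sketch of the type of calculation behind the quoted risks.
# Consumption rate, dose coefficient and risk factors below are generic
# ICRP/EPA-style values assumed for illustration; they will not reproduce the
# paper's figures exactly, but show how such risks are derived.

# --- Radiological risk from tritium in tap water (linear no-threshold) ---
activity = 10.6            # Bq/L, tritium in pipe-line drinking water (abstract)
intake = 730.0             # L/year, assumed 2 L/day consumption
dose_coeff = 1.8e-11       # Sv/Bq, ingestion of tritiated water (ICRP-type value)
risk_per_sv = 5.5e-2       # fatal cancer risk per Sv (nominal coefficient)

annual_dose = activity * intake * dose_coeff            # Sv/year
radiological_risk = annual_dose * risk_per_sv           # 1/year
print(f"Annual dose: {annual_dose:.2e} Sv, risk: {radiological_risk:.1e} /year")

# --- Chemical carcinogenic risk (EPA-style chronic daily intake x slope factor) ---
concentration = 0.005      # mg/L of a heavy metal (assumed)
body_weight = 70.0         # kg (assumed)
slope_factor = 1.5         # (mg/kg-day)^-1 (assumed)
cdi = concentration * 2.0 / body_weight                 # mg/kg-day, 2 L/day
chemical_risk = cdi * slope_factor                      # lifetime excess risk
print(f"Chemical carcinogenic risk: {chemical_risk:.1e}")
```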
Procedia PDF Downloads 240
339 Long-Term Exposure Assessments for Cooking Workers Exposed to Polycyclic Aromatic Hydrocarbons and Aldehydes Contained in Cooking Fumes
Authors: Chun-Yu Chen, Kua-Rong Wu, Yu-Cheng Chen, Perng-Jy Tsai
Abstract:
Cooking fumes are known to contain polycyclic aromatic hydrocarbons (PAHs) and aldehydes, and some of them have been proven carcinogenic or possibly carcinogenic to humans. Considering their chronic health effects, long-term exposure data are required for assessing cooking workers’ lifetime health risks. Previous exposure assessment studies, due to both time and cost constraints, were mostly based on cross-sectional data. Therefore, establishing long-term exposure data has become an important issue for conducting health risk assessments for cooking workers. An approach is proposed in this study. Here, the generation rates of both PAHs and aldehydes from a cooking process were determined by placing a sampling train directly under the exhaust fan under the total enclosure condition and the normal operating condition, respectively. Subtracting the concentration collected under the latter condition (representing the hood-collected concentration) from that of the former (representing the total emitted concentration), the fugitive emitted concentration was determined. The above data were further converted to generation rates based on the flow rates specified for the exhaust fan. The determinations of the above generation rates were conducted in a testing chamber with a selected cooking process (deep-frying chicken nuggets in 3 L of peanut oil at 200°C). The sampling train installed under the exhaust fan consisted of an IOM inhalable sampler with a glass fiber filter for collecting particle-phase PAHs, followed by an XAD-2 tube for gas-phase PAHs. A similar train was used to sample aldehydes; however, it was installed with a filter pre-coated with DNPH and followed by a 2,4-DNPH cartridge for collecting particle-phase and gas-phase aldehydes, respectively. PAH and aldehyde samples were analyzed by GC/MS-MS (Agilent 7890B) and HPLC-UV (HITACHI L-7100), respectively. The obtained generation rates of both PAHs and aldehydes were applied to the near-field/far-field exposure model to estimate the exposures of cooks (the estimated near-field concentration) and helpers (the estimated far-field concentration). For validation purposes, PAH and aldehyde sampling was conducted simultaneously using the same sampling trains at both the near-field and far-field sites of the testing chamber. The sampling results, together with the use of a mixed-effect model, were used to calibrate the estimated near-field/far-field exposures. In the present study, the obtained emission rates were further converted to emission factors for both PAHs and aldehydes according to the amount of cooking oil consumed. Applying long-term cooking oil consumption records, the emission rates for both PAHs and aldehydes were determined, and long-term exposure databanks for cooks (the estimated near-field concentration) and helpers (the estimated far-field concentration) were then established. Results show that the proposed approach was adequate for determining the generation rates of both PAHs and aldehydes under various fan exhaust flow rate conditions. The estimated near-field/far-field exposures, though significantly different from those obtained in the field, could be calibrated using the mixed-effect model. Finally, the established long-term databank could provide a useful basis for conducting long-term exposure assessments for cooking workers exposed to PAHs and aldehydes.
Keywords: aldehydes, cooking oil fumes, long-term exposure assessment, modeling, polycyclic aromatic hydrocarbons (PAHs)
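The near-field/far-field model mentioned above is the standard two-zone model, in which, at steady state, the far-field concentration is C_FF = G/Q and the near-field concentration is C_NF = C_FF + G/β, with G the emission rate, Q the room ventilation rate and β the airflow through the near-field boundary. The sketch below applies these relations with purely illustrative values for G, Q and β; the study's own emission rates and chamber parameters are not reproduced here.

```python
# Sketch of the steady-state two-zone (near-field/far-field) exposure model used
# to estimate cook (near-field) and helper (far-field) exposures. At steady
# state C_FF = G/Q and C_NF = C_FF + G/beta. The emission rate, room ventilation
# rate and inter-zone airflow below are illustrative assumptions, not the
# values measured in the study.
def two_zone_steady_state(G, Q, beta):
    """G: emission rate (mg/min), Q: room ventilation (m^3/min),
    beta: airflow through the near-field boundary (m^3/min)."""
    c_far = G / Q                 # far-field concentration, mg/m^3
    c_near = c_far + G / beta     # near-field concentration, mg/m^3
    return c_near, c_far

if __name__ == "__main__":
    G = 0.05      # mg/min of total PAHs escaping the hood (assumed)
    Q = 20.0      # m^3/min general ventilation (assumed)
    beta = 3.0    # m^3/min airflow through the near-field boundary (assumed)
    c_near, c_far = two_zone_steady_state(G, Q, beta)
    print(f"Cook (near-field):  {c_near * 1000:.2f} ug/m^3")
    print(f"Helper (far-field): {c_far * 1000:.2f} ug/m^3")
```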
Procedia PDF Downloads 142
338 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada
Authors: Stefan W. Kienzle
Abstract:
The increasing number of occurrences of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses on the public in coping with a changing climate. A climate index breaks down daily climate time series into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends of phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, trends, the quantification of changes, and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain threshold temperatures (0, ±10, ±20, +25, +30°C), frost days, the timing of frost days, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, snow, and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends. The slope of the trends was determined using the non-parametric Sen's slope test. A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4 - 5°C in the South and 6 - 7°C in the North, summers show the weakest warming during the same period, ranging from about 0.5 - 1.5°C. New agricultural opportunities exist in central regions where the number of heat units and growing degree days are increasing, and the number of frost days is decreasing. While the number of days below -20°C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, both the number of days with heat waves and the number of days with cold spells have increased two- to four-fold during the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes
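The two trend statistics named above can be sketched in a few lines: the Mann-Kendall S statistic with its normal approximation, and Sen's slope as the median of all pairwise slopes. The annual frost-day series used below is synthetic and only for illustration; tie corrections and serial-correlation adjustments that a production analysis might apply are omitted.

```python
# Sketch of the trend statistics applied to each climate index and grid cell:
# the nonparametric Mann-Kendall S statistic with its normal approximation and
# Sen's slope (median of pairwise slopes). The annual frost-day series below is
# synthetic, for illustration only; tie corrections are omitted.
import math
from itertools import combinations

def mann_kendall(series):
    n = len(series)
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i, j in combinations(range(n), 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0     # no tie correction in this sketch
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return s, z, p

def sens_slope(series):
    slopes = sorted((series[j] - series[i]) / (j - i)
                    for i, j in combinations(range(len(series)), 2))
    mid = len(slopes) // 2
    return slopes[mid] if len(slopes) % 2 else 0.5 * (slopes[mid - 1] + slopes[mid])

if __name__ == "__main__":
    # Synthetic annual number of frost days for 1951-2017, declining with noise
    years = list(range(1951, 2018))
    frost_days = [210 - 0.4 * (y - 1951) + (-3, 2, 5, -4, 1)[y % 5] for y in years]
    s, z, p = mann_kendall(frost_days)
    print(f"S = {s}, Z = {z:.2f}, p = {p:.4f}, "
          f"Sen's slope = {sens_slope(frost_days):.2f} days/yr")
```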
Procedia PDF Downloads 93
337 Michel Foucault’s Docile Bodies and The Matrix Trilogy: A Close Reading Applied to the Human Pods and Growing Fields in the Films
Authors: Julian Iliev
Abstract:
The recent release of The Matrix Resurrections persuaded many film scholars that The Matrix trilogy had lost its appeal and that its concepts were largely outdated. This study examines the human pods and growing fields in the trilogy. Their functionality is compared to Michel Foucault's concept of docile bodies, linking the fictional and contemporary worlds. This paradigm is scrutinized through the surveillance literature. The analogy brings to light common elements of hidden surveillance practices in technologies. The comparison illustrates the effects of body manipulation portrayed in the movies and their relevance to contemporary surveillance practices. Many scholars have utilized a close reading methodology in film studies (J. Bizzocchi, J. Tanenbaum, P. Larsen, S. Herbrechter, and Deacon et al.). The use of a particular lens through which a media text is examined is an indispensable factor that needs to be incorporated into the methodology. The study spotlights the scenes from the trilogy depicting the human pods and growing fields. The functionality of the pods and the fields compares directly with Foucault's concept of docile bodies. By utilizing Foucault's study as a lens, the research will unearth hidden components and insights into the films. Foucault recognizes three disciplines that produce docile bodies: 1) manipulation and the interchangeability of individual bodies, 2) elimination of unnecessary movements and management of time, and 3) a command system guaranteeing constant supervision and continuity protection. These disciplines can be found in the pods and growing fields. Each body occupies a single pod, aiding easier manipulation and fast interchangeability. The movement of the bodies in the pods is reduced to the absolute minimum. Thus, the body is transformed into the ultimate object of control – minimum movement correlates to maximum energy generation. Supervision is exercised by wiring the body with numerous types of cables. This ultimate supervision of body activity reduces the body's purpose to mere functioning. If a body does not function as an energy source, then it is unplugged, ejected, and liquefied. The command system secures the constant supervision and continuity of the process. To Foucault, the disciplines are distinctly different from slavery because they stop short of a total takeover of the bodies. This is a clear difference from the slave system implemented in the films. Even though their system might lack sophistication, it makes up for it in the elevation of functionality. Further, the surveillance literature illustrates the connection between the generation of body energy in The Matrix trilogy and the generation of individual data in contemporary society. This study found that the three disciplines producing docile bodies were present in the portrayal of the pods and fields in The Matrix trilogy. The above comparison, combined with the surveillance literature, yields insights into analogous processes and contemporary surveillance practices. Thus, the constant generation of energy in The Matrix trilogy can be equated to the consistent data generation in contemporary society. This essay shows the relevance of the body manipulation concept in the Matrix films to contemporary surveillance practices.
Keywords: docile bodies, film trilogies, Matrix movies, Michel Foucault, privacy loss, surveillance
Procedia PDF Downloads 93
336 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications
Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino
Abstract:
The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability with each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impetus to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given the wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions rather than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires some logical combination rules that define the building’s damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II. ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, roof structure, envelope walls and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately covered by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses
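The core calculation such a tool performs can be illustrated with a short sketch: a wind hazard curve is combined with component fragility functions and damage-state loss ratios to yield an expected annual loss, loosely following the four components of the proof-of-concept example. The hazard model, fragility medians and dispersions, loss ratios, and the independence assumption below are all illustrative assumptions, not ERMESS data or the paper's combination rules.

```python
# Sketch of the basic probabilistic wind-risk calculation that a tool like the
# one described would perform: combine a wind hazard curve with component
# fragility functions (lognormal CDFs) and damage-state loss ratios to get an
# expected annual loss. Hazard parameters, fragility medians/dispersions and
# loss ratios below are illustrative assumptions, not ERMESS data.
import numpy as np
from scipy.stats import lognorm

def hazard_annual_exceedance(v, v0=25.0, k=4.0):
    """Annual probability that the peak gust exceeds v (m/s); assumed power-law form."""
    return np.minimum(1.0, (v0 / v) ** k)

# Component fragilities: (median gust m/s, lognormal dispersion, loss ratio)
components = {
    "roof covering":     (35.0, 0.30, 0.15),
    "roof structure":    (55.0, 0.25, 0.40),
    "envelope walls":    (60.0, 0.25, 0.30),
    "envelope openings": (45.0, 0.35, 0.15),
}

def expected_loss_ratio(v):
    """Simplifying assumption: component damage is independent and losses add."""
    return sum(lr * lognorm.cdf(v, s=beta, scale=med)
               for med, beta, lr in components.values())

# Numerically integrate loss over the hazard to get the expected annual loss ratio
v = np.linspace(20.0, 90.0, 700)
exceedance = hazard_annual_exceedance(v)
annual_rate_density = -np.gradient(exceedance, v)   # occurrence density over v
dv = v[1] - v[0]
eal_ratio = float(np.sum(expected_loss_ratio(v) * annual_rate_density) * dv)
print(f"Expected annual loss: {eal_ratio:.2%} of the asset value")
```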
Procedia PDF Downloads 181
335 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters
Authors: Trevor C. Brown, David J. Miron
Abstract:
Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size and porosity for applications such as heterogeneous catalysis, and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy are dependent on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to provide information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high surface area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes) and thermodynamic (Gibbs free energy) variations of the adsorption sites.
Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics
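For reference, the constant-parameter Langmuir fit that the pressure-varying approach generalizes can be written in a few lines: q(P) = q_m·K·P/(1 + K·P), with the monolayer capacity q_m and equilibrium constant K obtained by least squares. The synthetic uptake data and the crude windowed refit at the end are illustrative assumptions; the actual FLS-PVLR method instead lets both parameters vary smoothly with pressure via dynamic and measurement discrepancy terms.

```python
# Sketch of the constant-parameter Langmuir fit that the pressure-varying
# (FLS-PVLR) approach generalizes: q(P) = q_m * K * P / (1 + K * P), with the
# monolayer capacity q_m and equilibrium constant K fitted by least squares.
# The synthetic uptake data are illustrative; in the paper both parameters are
# instead allowed to vary smoothly with pressure.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(P, q_m, K):
    return q_m * K * P / (1.0 + K * P)

# Synthetic isotherm: "true" q_m = 2.5 mmol/g, K = 0.8 1/bar, plus noise
P = np.linspace(0.05, 5.0, 30)                       # pressure, bar
rng = np.random.default_rng(1)
q_obs = langmuir(P, 2.5, 0.8) + rng.normal(0, 0.02, P.size)

(q_m_fit, K_fit), _ = curve_fit(langmuir, P, q_obs, p0=[1.0, 1.0])
print(f"Fitted monolayer capacity q_m = {q_m_fit:.2f} mmol/g, K = {K_fit:.2f} 1/bar")

# A crude pressure-varying view: refit K on overlapping pressure windows while
# holding q_m at the global fit (a rough stand-in for the FLS-PVLR idea).
for lo in range(0, 25, 5):
    window = slice(lo, lo + 10)
    (K_local,), _ = curve_fit(lambda p, K: langmuir(p, q_m_fit, K),
                              P[window], q_obs[window], p0=[K_fit])
    print(f"P in [{P[window][0]:.2f}, {P[window][-1]:.2f}] bar: local K = {K_local:.2f}")
```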
Procedia PDF Downloads 234
334 A Wasp Parasitoid of the Genus Cotesia (Hymenoptera: Braconidae) Naturally Parasitizing Pectinophora gossypiella (Saunders) on Transgenic Cotton in Indian Punjab
Authors: Vijay Kumar, G. K. Grewal, Prasad S. Burange
Abstract:
India is one of the largest cultivators of cotton in the world. Among the various constraints, insect pests pose a major hurdle to the success of cotton cultivation. Various bollworms, including the pink bollworm, Pectinophora gossypiella (Saunders), cause serious losses in India, China, Pakistan, Egypt, Brazil, tropical America, and Africa. Bt cotton cultivars carrying Cry genes were introduced in India in 2002 (Cry1Ac) and 2006 (Cry1Ac + Cry2Ab) for control of American, spotted, and pink bollworms. Pink bollworm (PBW) larvae infest flowers, squares, and bolls. The larva burrows into flowers and bolls to feed on pollen and seeds, respectively. It has a shorter life cycle and more generations per year, so it develops resistance more quickly than other bollworms. Further, it has cryptic feeding sites, i.e., flowers and bolls/seeds, so it is not exposed to harsh environmental fluctuations and insecticidal applications. The Cry toxin concentration is low in its feeding sites, i.e., the seeds and flowers of cotton. The use of insecticides and Bt cotton is the primary control measure and has been successful in limiting the damage of PBW, but with the passage of time the pest has developed resistance against insecticides and Bt cotton. Moreover, the use of insecticides increases chemical control costs while causing secondary pest problems and environmental pollution. Extensive research has indicated that monitoring and control measures such as biological, cultural, chemical, and host plant resistance methods can be integrated for effective PBW management. The potential of various biological control organisms needs to be explored. The impact of transgenic cotton on non-target organisms, particularly natural enemies, which play an important role in pest control, is still being debated. According to some authors, Bt crops have a negative impact on natural enemies, particularly parasitoids. An experiment was carried out in the Integrated Pest Management Laboratory of the Department of Entomology, Punjab Agricultural University, Ludhiana, Punjab, India, to study the natural parasitization of PBW on Bt cotton in 2022. A large number of PBW larvae were kept individually in plastic containers and fed with cotton bolls until the emergence of a parasitoid cocoon. The first parasitoid cocoon was observed on October 25, 2022. Symptoms of parasitization were never seen on the larvae. Larvae stopped feeding and became inactive before the emergence of the parasitoid for pupation. The grub makes its way out of the larva through a hole in the integument and, immediately after emerging, spins its cocoon. The adult parasitoid emerged from the cocoon after eight days. The parasitoids that emerged from the cocoons were identified as Cotesia (Hymenoptera: Braconidae) based on the features of the adults. Out of 475 PBW larvae, 87 were parasitized, giving 18.31% parasitization. Of these, 6.73% were first-instar, 10.52% second-instar, and 1.05% third-instar larvae of PBW. No parasitization was observed in fourth-instar larvae. Parasitoids were observed towards the end of the cropping season and mostly on the earlier instars. It is concluded that the potential of Cotesia may be explored as a biological control agent against PBW, being safer for human beings, the environment, and non-target organisms. Keywords: biocontrol, Bt cotton, Cotesia, Pectinophora gossypiella
Procedia PDF Downloads 84
333 Application of Self-Efficacy Theory in Counseling Deaf and Hard of Hearing Students
Authors: Nancy A. Delich, Stephen D. Roberts
Abstract:
This case study explores using self-efficacy theory in counseling deaf and hard of hearing students in one California school district. Self-efficacy is described as the confidence a student has for performing a set of skills required to succeed at a specific task. When students need to learn a skill, self-efficacy can be a major factor in influencing behavioral change. Self-efficacy is domain specific, meaning that students can have high confidence in their abilities to accomplish a task in one domain, while at the same time having low confidence in their abilities to accomplish another task in a different domain. The communication isolation experienced by deaf and hard of hearing children and adolescents can negatively impact their belief about their ability to navigate life challenges. There is a need to address issues that impact deaf and hard of hearing students’ social-emotional development. Failure to address these needs may result in depression, suicidal ideation, and anxiety among other mental health concerns. Self-efficacy training can be used to address these socio-emotional developmental issues with this population. Four sources of experiences are applied during an intervention: (a) enactive mastery experience, (b) vicarious experience, (c) verbal persuasion, and (d) physiological and affective states. This case study describes the use of self-efficacy training with a coed group of 12 deaf and hard of hearing high school students who experienced bullying at school. Beginning with enactive mastery experience, the counselor introduced the topic of bullying to the group. The counselor educated the students about the different types of bullying while teaching them the terminology, signs and their meanings. The most effective way to increase self-efficacy is through extensive practice. To better understand these concepts, the students practiced through role-playing with the goal of developing self-advocacy skills. Vicarious experience is the perception that students have about their capabilities. Viewing other students advocating for themselves, cognitively rehearsing what actions they will and will not take, and teaching each other how to stand up against bullying can strengthen their belief in successfully overcoming bullying. The third source of self-efficacy beliefs is verbal persuasion. It occurs when others express belief in the capabilities of the student. Didactic training and pedagogic materials on bullying were employed as part of the group counseling sessions. The fourth source of self-efficacy appraisals is physiological and affective states. Students expect positive emotions to be associated with successful skilled performance. When students practice new skills, the counselor can apply several strategies to enhance self-efficacy while reducing and controlling emotional and physical states. The intervention plan incorporated all four sources of self-efficacy training during several interactive group sessions regarding bullying. There was an increased understanding around the issues of bullying, resulting in the students’ belief of their ability to perform protective behaviors and deter future occurrences. The outcome of the intervention plan resulted in a reduction of reported bullying incidents. 
In conclusion, self-efficacy training can be an effective counseling and teaching strategy in addressing and enhancing the social-emotional functioning with deaf and hard of hearing adolescents.Keywords: counseling, self-efficacy, bullying, social-emotional development, mental health, deaf and hard of hearing students
Procedia PDF Downloads 354
332 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit
Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili
Abstract:
Metamaterials cross classical physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations can be achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, in particular Dennis Gabor's invention: holography. However, the major difficulty here is the lack of a suitable recording medium, so some enhancements were essential; the 2D version of bulk metamaterials, the so-called metasurface, has been introduced. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell's equations. In this context, integral methods are emerging as an important approach to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. However, solving this kind of equation tends to become more complicated and time-consuming as the structural complexity increases. Here, the use of the equivalent circuit method offers the most scalable way to develop an integral-method formulation. In fact, to ease the resolution of Maxwell's equations, the method of Generalised Equivalent Circuits was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electrical image of the studied structure using the discontinuity-plane paradigm while taking its environment into account, so that the electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources not storing energy. The environmental effects are included through an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface's building block consists of a thin gold film, a dielectric spacer (SiO₂) and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene's chemical potential on the unit-cell input impedance. It was found that the variation of the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained here, we determined that phase modulation is realized by adjusting graphene's complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers full 2π reflection phase control. Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain
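The abstract does not give the conductivity model it uses; as a hedged illustration of how the chemical potential tunes graphene's complex sheet conductivity at terahertz frequencies, the sketch below evaluates the standard intraband (Drude-like) term of the Kubo formula. The temperature, scattering time and operating frequency are assumed values, not parameters from the paper, and the full MoM-GEC analysis is not reproduced here.

```python
# Hedged sketch: intraband (Drude-like) Kubo conductivity of a graphene sheet,
# showing how the chemical potential mu_c tunes the complex conductivity that
# in turn sets the reflection phase of a THz reflectarray cell.
import numpy as np
from scipy.constants import e, hbar, k as kB

def sigma_intraband(omega, mu_c, tau=1e-13, T=300.0):
    """Intraband graphene sheet conductivity (S), e^{-i omega t} convention."""
    x = mu_c / (kB * T)
    prefactor = 2.0 * e**2 * kB * T / (np.pi * hbar**2)
    return prefactor * np.log(2.0 * np.cosh(x / 2.0)) * 1j / (omega + 1j / tau)

f = 1.0e12                                   # 1 THz operating frequency (assumed)
omega = 2.0 * np.pi * f
for mu_eV in (0.1, 0.3, 0.5, 0.7):           # swept chemical potential in eV
    s = sigma_intraband(omega, mu_eV * e)
    print(f"mu_c = {mu_eV:.1f} eV -> sigma = {s.real*1e3:.3f} + {s.imag*1e3:.3f}j mS")
```

Sweeping mu_c in this way changes both the real and imaginary parts of the sheet conductivity, which is the mechanism by which the unit-cell input impedance, and hence the reflection phase, is tuned without altering the antenna length.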
Procedia PDF Downloads 177
331 Development of PCL/Chitosan Core-Shell Electrospun Structures
Authors: Hilal T. Sasmazel, Seda Surucu
Abstract:
Skin tissue engineering is a promising field for the treatment of skin defects using scaffolds. This approach involves the use of living cells and biomaterials to restore, maintain, or regenerate tissues and organs in the body by providing (i) a larger surface area for cell attachment, (ii) proper porosity for cell colonization and cell-to-cell interaction, and (iii) 3-dimensionality at the macroscopic scale. Recent studies in this area mainly focus on the fabrication of scaffolds that can closely mimic the natural extracellular matrix (ECM) to create a tissue-specific, niche-like environment at the subcellular scale. Scaffolds designed as ECM-like architectures incorporate into the host with minimal scarring/pain and facilitate angiogenesis. This study concerns the combination of the synthetic polymer PCL and the natural polymer chitosan to form 3D PCL/chitosan core-shell structures for skin tissue engineering applications. Among the polymers used in tissue engineering, the natural polymer chitosan and the synthetic polymer poly(ε-caprolactone) (PCL) are widely preferred in the literature. Chitosan has long attracted researchers because of its superior biocompatibility and structural resemblance to the glycosaminoglycans of bone tissue. However, its low mechanical flexibility and limited biodegradability reveal the necessity of using this polymer in a composite structure. On the other hand, PCL is a versatile polymer due to its low melting point (60°C), ease of processability, degradability via non-enzymatic processes (hydrolysis) and good mechanical properties. Nevertheless, PCL also has several disadvantages, such as its hydrophobic structure, limited bio-interaction and susceptibility to bacterial biodegradation. Therefore, it becomes crucial to use both of these polymers together as a hybrid material in order to overcome the disadvantages of each and combine their advantages. The scaffolds here were fabricated using the electrospinning technique, and the samples were characterized by contact angle (CA) measurements, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS). Additionally, gas permeability tests, mechanical tests, thickness measurements and PBS absorption and shrinkage tests were performed for all types of scaffolds (PCL, chitosan and PCL/chitosan core-shell). Using the ImageJ launcher software program (USA) on SEM photographs, the average inter-fiber diameter values were calculated as 0.717±0.198 µm for PCL, 0.660±0.070 µm for chitosan and 0.412±0.339 µm for PCL/chitosan core-shell structures. Additionally, the average inter-fiber pore size values exhibited decreases of 66.91% and 61.90% for the PCL and chitosan structures, respectively, compared to the PCL/chitosan core-shell structures. TEM images proved that homogeneous, continuous, bead-free core-shell fibers were obtained. XPS analysis of the PCL/chitosan core-shell structures exhibited the characteristic peaks of the PCL and chitosan polymers. The measured average gas permeability value of the produced PCL/chitosan core-shell structure was determined to be 2315±3.4 g.m-2.day-1. In the future, cell-material interactions of the developed PCL/chitosan core-shell structures will be studied with the L929 ATCC CCL-1 mouse fibroblast cell line.
Standard MTT assays and microscopic imaging methods will be used to investigate the cell attachment, proliferation and growth capacities of the developed materials. Keywords: chitosan, coaxial electrospinning, core-shell, PCL, tissue scaffold
Procedia PDF Downloads 481
330 Application of Pedicled Perforator Flaps in Large Cavities of the Breast
Authors: Neerja Gupta
Abstract:
Objective: Reconstruction of large cavities of the breast without contralateral symmetrisation. Background: Reconstruction of the breast includes a wide spectrum of procedures, from displacement to regional and distant flaps. Pedicled perforator flaps cover a wide spectrum of reconstructive surgery for all quadrants of the breast, especially in patients with comorbidities. These axial flaps, used singly or as an adjunct, are based on a near-constant perforator vessel; a ratio of 2:1 at its entry into the flap is sufficient to maintain vascularity. The perforators of the lateral chest wall, viz. LICAP and LTAP, have overlapping perforasomes without clear demarcation. LTAP is localized in the narrow zone between the lateral breast fold and the anterior axillary line, 2.5-3.8 cm from the fold. MICAP are localized 1-2 cm from the sternum. Being 1-2 mm in diameter, a single perforator is sufficient to maintain the flap. LICAP has a dominant perforator in the 6th-11th spaces, while LTAP has higher-placed dominant perforators in the 4th and 5th spaces. Methodology: Six consecutive patients who underwent reconstruction of the breast with pedicled perforator flaps were retrospectively analysed. Selection of the flap was based on the size and location of the tumour, anticipated volume loss, willingness to undergo contralateral symmetrisation, cosmetic expectations, and the finances available. Three patients underwent vertical LTAP, the distal limit of the flap being the inframammary crease; three patients underwent MICAP, oriented along the axis of the rib, the distal limit being the anterior axillary line. Preoperative identification was done using a unidirectional handheld Doppler. The flap was raised caudal to cranial, the pivot point of rotation being the vessel's entry into the skin. The donor area is determined by the skin pinch. Flap harvest time was 20-25 minutes. Intraoperative vascularity was assessed with dermal bleed. The patients' immediate pre-operative, post-operative and follow-up pictures were compared independently by two breast surgeons. Patients were given the Breast-Q questionnaire (licensed) for scoring. Results: The median age of the six patients was 46. Each patient had a hospital stay of 24 hours. None of the patients was willing to undergo contralateral symmetrisation. The specimen dimensions ranged from 8x6.8x4 cm to 19x16x9 cm. The reconstructed breast volume ranged from 30 percent to 45 percent. All wide excisions had free margins on frozen section. The mean flap dimensions were 12x5x4.5 cm. One LTAP underwent marginal necrosis and delayed wound healing due to seroma. Three patients were phyllodes, of which one was borderline and two were benign on final histopathology. The other three patients had invasive ductal cancer and have completed their radiation. The median follow-up is 7 months; the satisfaction scores at the median follow-up of 7 months are 90 for physical wellbeing and 85 for surgical results. Surgeons scored fair to good on the Harvard score. Conclusion: Pedicled perforator flaps are a valuable option for defects of up to 3/8th of the breast volume. LTAP is preferred for tumours of the central, upper, and outer quadrants of the breast and MICAP for the inner and lower quadrants. The vascularity of the flap depends on the angiosomal territories and adequate venous and cavity drainage. Keywords: breast, oncoplasty, pedicled, perforator
Procedia PDF Downloads 187
329 The 10,000-Fold Effect of Retrograde Neurotransmission: A New Concept for Cerebral Palsy Revival by the Use of Nitric Oxide Donors
Authors: V. K. Tewari, M. Hussain, H. K. D. Gupta
Abstract:
Background: Nitric oxide donors (NODs) (intrathecal sodium nitroprusside (ITSNP) and oral tadalafil 20 mg post ITSNP) have been studied in this context in cerebral palsy patients for fast recovery. This work proposes two mechanisms for acute cases and one mechanism for chronic cases, which are interrelated, for physiological recovery. a) Retrograde neurotransmission (acute cases): 1) Normal excitatory impulse: at the synaptic level, glutamate activates NMDA receptors, with nitric oxide synthetase (NOS) on the postsynaptic membrane, for further propagation by the calcium-calmodulin complex. Nitric oxide (NO, produced by NOS) travels backward across the chemical synapse and binds the axon-terminal NO receptor/sGC of the presynaptic neuron, regulating anterograde neurotransmission (ANT) via retrograde neurotransmission (RNT). Heme is the ligand-binding site of the NO receptor/sGC. Heme exhibits >10,000-fold higher affinity for NO than for oxygen (the 10,000-fold effect), and binding is completed in 20 msec. 2) Pathological conditions: normal synaptic activity, including both ANT and RNT, is absent. A NO donor (SNP) releases NO from NOS in the postsynaptic region. NO travels backward across the chemical synapse to bind the heme of the NO receptor in the axon terminal of the presynaptic neuron, generating an impulse, as under normal conditions. b) Vasospasm (acute cases): perforators show vasospastic activity. NO vasodilates the perforators via the NO-cAMP pathway. c) Long-term potentiation (LTP) (chronic cases): the NO-cGMP pathway plays a role in LTP at many synapses throughout the CNS and at the neuromuscular junction. LTP has been reviewed both generally and with respect to brain regions specific for memory/learning. Aims/Study Design: The principles of “generation of impulses from the presynaptic region to the postsynaptic region by very potent RNT (the 10,000-fold effect)” and “vasodilation of arteriolar perforators” are the basis of the authors' hypothesis for treating cerebral palsy cases. Case-control prospective study. Materials and Methods: The experimental population included 82 cerebral palsy patients (10 patients were given control treatments without NOD or with 5% dextrose superfusion, and 72 patients comprised the NOD group). The mean time to superfusion was 5 months post-cerebral palsy. Pre- and post-NOD status was monitored by the Gross Motor Function Classification System for Cerebral Palsy (GMFCS), MRI, and TCD studies. Results: After 7 days in the NOD group, the mean change in the GMFCS score was an increase of 1.2 points; after 3 months, there was an increase of 3.4 points, compared with the control-group increase of 0.1 points at 3 months. MRI and TCD documented the improvements. Conclusions: NOD therapy (ITSNP boosts the recovery and oral tadalafil maintains it at the desired level) acts swiftly in the treatment of CP, acting within 7 days even at 5 months post-cerebral palsy, via any of the three mechanisms. Keywords: cerebral palsy, intrathecal sodium nitroprusside, oral tadalafil, perforators, vasodilation, retrograde transmission, the 10,000-fold effect, long-term potentiation
Procedia PDF Downloads 363
328 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples
Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges
Abstract:
Soils are at the crossroads of many issues, such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information at national and global scales. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called "ancillary covariates" that come from other available spatial products. The model is then generalized over grids where soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at national and global scales and the increase in available spatial covariates, national and continental DSM initiatives are continuously multiplying. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are being developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in laboratories or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to differing laws in different countries. Other issues relate to communication with end-users and to education, especially on the use of uncertainty. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level.
All this progress is scientifically stimulating and also promising for providing tools to improve and monitor soil quality in individual countries, in the EU and at the global level. Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review
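The review describes the generic DSM workflow (calibrate a model on point observations plus exhaustive covariates, predict onto a grid, validate) without prescribing a specific implementation. The minimal sketch below illustrates that workflow with a random forest regressor; the file names, column names and choice of model are assumptions for illustration only.

```python
# Hedged sketch of a generic DSM workflow: calibrate on soil point observations
# joined to spatial covariates, cross-validate, then predict onto a grid.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Point observations joined to covariates (climate, relief, parent material, ...)
obs = pd.read_csv("soil_points_with_covariates.csv")        # hypothetical file
covariates = ["elevation", "slope", "mean_annual_temp", "precipitation", "ndvi"]
X, y = obs[covariates], obs["topsoil_soc"]                   # e.g. soil organic carbon

model = RandomForestRegressor(n_estimators=500, random_state=0)
print("cross-validated R2:", cross_val_score(model, X, y, cv=10, scoring="r2").mean())
model.fit(X, y)

# Generalise to unsampled grid cells carrying the same covariates
grid = pd.read_csv("prediction_grid_covariates.csv")         # hypothetical file
grid["topsoil_soc_pred"] = model.predict(grid[covariates])
grid.to_csv("topsoil_soc_map.csv", index=False)
```

In practice, national initiatives add spatial cross-validation and uncertainty estimates (e.g. quantile forests or bootstrapping) on top of this basic chain, but the calibrate-predict-validate structure is the same.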
Procedia PDF Downloads 184
327 Improving Recovery Reuse and Irrigation Scheme Efficiency – North Gaza Emergency Sewage Treatment Project as Case Study
Authors: Yaser S. Kishawi, Sadi R. Ali
Abstract:
Part of Palestine, the Gaza Strip (365 km2, 1.8 million inhabitants) is a semi-arid zone that relies solely on the coastal aquifer. The coastal aquifer is the only source of water, with only 5-10% of it suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to find a non-conventional water resource from treated wastewater to cover agricultural requirements and serve the population. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and infiltration basins - IBs), phase B (a new WWTP) and phase C (Recovery and Reuse Scheme – RRS – to capture the spreading plume). Currently, only phase A is functioning. Nearly 23 Mm3 of partially treated wastewater have been infiltrated into the aquifer. Phases B and C witnessed many delays, and this forced a reassessment of the original RRS design. An Environmental Management Plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, in order to measure the efficiency of the SAT system and the spread of the contamination plume in relation to the efficiency of the proposed RRS, along with the proposed locations of the 27 recovery wells that form part of the proposed RRS. The results from the monitored wells were assessed against PWA baseline data and put into a groundwater model to simulate the plume and to propose the most suitable solution to the delays. The redesign mainly manipulated the pumping rates of the wells, the proposed locations and the operating schedules (including well groupings). The proposed simulations were examined using Visual MODFLOW v4.2. The results of the monitored wells were assessed according to the location of the monitoring wells relative to the proposed recovery well locations (200 m, 500 m and 750 m away from the IBs). Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) together with a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, which indicated an expansion of the plume to this distance. At this rate, and given the time required to construct the recovery scheme, the RRS would fail to capture the plume if the original design were kept. Based on that, many simulations were conducted, leading to three main scenarios. The scenarios manipulated the starting dates, the pumping rates and the locations of the recovery wells. Simulations of plume expansion and path-lines were extracted from the model to examine how to prevent the expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates. This scenario proposed adding two additional recovery wells at a location beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection. Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, North Gaza
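The study's actual plume-capture analysis was done in Visual MODFLOW; as a simpler, hedged back-of-the-envelope companion to that idea, the sketch below uses the classic analytical capture-zone relations for a single pumping well in uniform regional flow (Javandel and Tsang type) to show how pumping rate, conductivity and gradient trade off when sizing recovery wells. All parameter values are invented for illustration and are not North Gaza field data.

```python
# Hedged sketch: analytical capture-zone dimensions for one recovery well in a
# uniform regional flow, a quick check before (or alongside) full MODFLOW runs.
import math

def capture_zone(Q, K, i, b):
    """Return stagnation-point distance and capture widths for one recovery well.

    Q : pumping rate (m3/day), K : hydraulic conductivity (m/day),
    i : regional hydraulic gradient (-), b : saturated thickness (m).
    """
    q = K * i                                  # regional Darcy flux (m/day)
    x_stag = Q / (2.0 * math.pi * b * q)       # downstream stagnation point (m)
    w_well = Q / (2.0 * b * q)                 # capture width at the well line (m)
    w_max = Q / (b * q)                        # asymptotic upstream capture width (m)
    return x_stag, w_well, w_max

# Illustrative numbers only (assumed, not measured):
x0, w0, wmax = capture_zone(Q=500.0, K=20.0, i=0.003, b=40.0)
print(f"stagnation point {x0:.0f} m, width at well {w0:.0f} m, max width {wmax:.0f} m")
```

A row of recovery wells must be spaced so that adjacent capture widths overlap across the full plume front; if construction delays let the plume pass the planned well line, either the pumping rates or the well locations have to change, which is essentially what the reported scenario analysis explored.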
Procedia PDF Downloads 314
326 Stuck Spaces as Moments of Learning: Uncovering Threshold Concepts in Teacher Candidate Experiences of Teaching in Inclusive Classrooms
Authors: Joy Chadwick
Abstract:
There is no doubt that classrooms of today are more complex and diverse than ever before. Preparing teacher candidates to meet these challenges is essential to ensure the retention of teachers within the profession and to ensure that graduates begin their teaching careers with the knowledge and understanding of how to effectively meet the diversity of students they will encounter. Creating inclusive classrooms requires teachers to have a repertoire of effective instructional skills and strategies. Teachers must also have the mindset to embrace diversity and value the uniqueness of individual students in their care. This qualitative study analyzed teacher candidates' experiences as they completed a fourteen-week teaching practicum while simultaneously completing a university course focused on inclusive pedagogy. The research investigated the challenges and successes teacher candidates had in navigating the translation of theory related to inclusive pedagogy into their teaching practice. Applying threshold concept theory as a framework, the research explored the troublesome concepts, liminal spaces, and transformative experiences as connected to inclusive practices. Threshold concept theory suggests that within all disciplinary fields, there exists particular threshold concepts that serve as gateways or portals into previously inaccessible ways of thinking and practicing. It is in these liminal spaces that conceptual shifts in thinking and understanding and deep learning can occur. The threshold concept framework provided a lens to examine teacher candidate struggles and successes with the inclusive education course content and the application of this content to their practicum experiences. A qualitative research approach was used, which included analyzing twenty-nine course reflective journals and six follow up one-to-one semi structured interviews. The journals and interview transcripts were coded and themed using NVivo software. Threshold concept theory was then applied to the data to uncover the liminal or stuck spaces of learning and the ways in which the teacher candidates navigated those challenging places of teaching. The research also sought to uncover potential transformative shifts in teacher candidate understanding as connected to teaching in an inclusive classroom. The findings suggested that teacher candidates experienced difficulties when they did not feel they had the knowledge, skill, or time to meet the needs of the students in the way they envisioned they should. To navigate the frustration of this thwarted vision, they relied on present and previous course content and experiences, collaborative work with other teacher candidates and their mentor teachers, and a proactive approach to planning for students. Transformational shifts were most evident in their ability to reframe their perceptions of children from a deficit or disability lens to a strength-based belief in the potential of students. It was evident that through their course work and practicum experiences, their beliefs regarding struggling students shifted as they saw the value of embracing neurodiversity, the importance of relationships, and planning for and teaching through a strength-based approach. Research findings have implications for teacher education programs and for understanding threshold concepts theory as connected to practice-based learning experiences.Keywords: inclusion, inclusive education, liminal space, teacher education, threshold concepts, troublesome knowledge
Procedia PDF Downloads 79
325 qPCR Method for Detection of Halal Food Adulteration
Authors: Gabriela Borilova, Monika Petrakova, Petr Kralik
Abstract:
Nowadays, European producers are increasingly interested in the production of halal meat products. Halal meat has been increasingly appearing in the EU's market network and meat products from European producers are being exported to Islamic countries. Halal criteria are mainly related to the origin of muscle used in production, and also to the way products are obtained and processed. Although the EU has legislatively addressed the question of food authenticity, the circumstances of previous years when products with undeclared horse or poultry meat content appeared on EU markets raised the question of the effectiveness of control mechanisms. Replacement of expensive or not-available types of meat for low-priced meat has been on a global scale for a long time. Likewise, halal products may be contaminated (falsified) by pork or food components obtained from pigs. These components include collagen, offal, pork fat, mechanically separated pork, emulsifier, blood, dried blood, dried blood plasma, gelatin, and others. These substances can influence sensory properties of the meat products - color, aroma, flavor, consistency and texture or they are added for preservation and stabilization. Food manufacturers sometimes access these substances mainly due to their dense availability and low prices. However, the use of these substances is not always declared on the product packaging. Verification of the presence of declared ingredients, including the detection of undeclared ingredients, are among the basic control procedures for determining the authenticity of food. Molecular biology methods, based on DNA analysis, offer rapid and sensitive testing. The PCR method and its modification can be successfully used to identify animal species in single- and multi-ingredient raw and processed foods and qPCR is the first choice for food analysis. Like all PCR-based methods, it is simple to implement and its greatest advantage is the absence of post-PCR visualization by electrophoresis. qPCR allows detection of trace amounts of nucleic acids, and by comparing an unknown sample with a calibration curve, it can also provide information on the absolute quantity of individual components in the sample. Our study addresses a problem that is related to the fact that the molecular biological approach of most of the work associated with the identification and quantification of animal species is based on the construction of specific primers amplifying the selected section of the mitochondrial genome. In addition, the sections amplified in conventional PCR are relatively long (hundreds of bp) and unsuitable for use in qPCR, because in DNA fragmentation, amplification of long target sequences is quite limited. Our study focuses on finding a suitable genomic DNA target and optimizing qPCR to reduce variability and distortion of results, which is necessary for the correct interpretation of quantification results. In halal products, the impact of falsification of meat products by the addition of components derived from pigs is all the greater that it is not just about the economic aspect but above all about the religious and social aspect. This work was supported by the Ministry of Agriculture of the Czech Republic (QJ1530107).Keywords: food fraud, halal food, pork, qPCR
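The abstract mentions that qPCR can report absolute quantities by comparing an unknown sample with a calibration curve. As a hedged illustration of that routine step, the sketch below fits a standard curve of Cq against log10 copy number, derives the amplification efficiency, and interpolates an unknown. The Cq values and copy numbers are invented for illustration, not data from this study.

```python
# Hedged sketch: absolute quantification from a qPCR standard (calibration) curve.
import numpy as np

std_copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])        # DNA standards (copies, assumed)
std_cq = np.array([18.1, 21.5, 24.9, 28.3, 31.8])       # measured Cq values (assumed)

# Linear fit of Cq vs log10(copies); slope near -3.32 means ~100 % efficiency.
slope, intercept = np.polyfit(np.log10(std_copies), std_cq, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0
print(f"slope {slope:.2f}, efficiency {efficiency*100:.0f} %")

def copies_from_cq(cq):
    """Interpolate the copy number of an unknown sample from its Cq value."""
    return 10.0 ** ((cq - intercept) / slope)

print(f"unknown at Cq 26.0 -> {copies_from_cq(26.0):.2e} copies")
```

For trace-level adulteration testing, the practical limits are the linear range of this curve and the shortness of the amplicon, which is why the abstract stresses choosing a suitable genomic target rather than the long mitochondrial fragments used in conventional PCR.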
Procedia PDF Downloads 247
324 Improvement of Autism Diagnostic Observation Schedule Scores after Comprehensive Intensive Early Interventions in a Clinical Setting
Authors: Nils Haglund, Svenolof Dahlgren, Maria Rastam, Peik Gustafsson, Karin Kalien
Abstract:
In Sweden, like in most developed countries, there is a substantial increase of children diagnosed with autism and other conditions within the autism spectrum (ASD). The rapid increase of ASD rates stresses the importance of developing care programs to provide support and comprehensive interventions for affected families. The current observational study was conducted in order to evaluate an ongoing Comprehensive Intensive Early Intervention (CIEI) program for children with autism in southern Sweden. The change in autism symptoms among children participating in CIEI (intervention group, n=67) was compared with children who received traditional habilitation services only (comparison group, n=27). Children of parents who accepted the offered CIEI-program, constituted the intervention group, whereas children, whose parents (for some reason) were not interested in the offered CIEI-program, constituted the comparison group. The CIEI-program was individualized to each child by experienced applied behavior analysis (ABA) specialists with different backgrounds as psychologists, speech pathologists or special education teachers, in cooperation with parents and preschool staff. Due to the individualization, the intervention could vary in intensity and techniques. The intensity was calculated to 15-25 hours each week at home and the preschool altogether. Each child was assigned one 'trainer', who was often employed as a preschool teacher but could have another educational background. An agreement between supervisor- parents and preschool staff was reached to confirm the intensity and content of the CIEI- program over an approximately two-year intervention period. Symptom changes were measured as evaluation-ADOS-2-scores, total- and severity-scores, minus the corresponding baseline-scores, divided by the time between baseline and evaluation. The difference between the study-groups regarding change of ADOS-2-scores was estimated using ANCOVA. In the current study, children in the CIEI-group improved their ADOS-2-total scores between baseline and evaluation (-0.8 scores per year; 95%CI: -1.2 to -0.4), whereas no such improvement was detected in the comparison group (+0.1 scores per year; 95%CI: -0.7 to +0.9). The change difference (change in the CIEI-group vs. change in the comparison group) was statistically significant, both crude and after adjusting for possible confounders (-1.1; 95%CI -1.9 to -0.4). Children in the CIEI-group also significantly improved their ADOS-calibrated severity scores, but not significantly differently so from the comparison group. The results from the current study indicate that the CIEI program significantly improves social and communicative skills among children with autism and that children with developmental delay could benefit to a similar degree as other children. The results support earlier studies reporting on the improvement of autism symptoms after early intensive interventions. The results from observational studies are difficult to interpret, but it is nevertheless of uttermost importance to evaluate costly autism intervention programs. Such results may be of immediate importance to healthcare organizations when allocating the already strained resources to different patient groups. 
Despite the obvious limitations of the current naturalistic study, the results support previous positive studies and indicate that children with autism benefit from participating in early comprehensive, intensive programs and that investments in these programs may be highly justifiable. Keywords: autism symptoms, ADOS-scores, evaluation, intervention program
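The abstract defines the outcome as an annualised change score (evaluation minus baseline, divided by the time between assessments) compared between groups with adjustment for possible confounders. The sketch below illustrates that computation and a simple ANCOVA-style model on an invented toy data frame; it is not the study's data set or analysis code, and the adjustment here is limited to the baseline score.

```python
# Hedged sketch: annualised ADOS-2 change scores compared between groups,
# adjusted for baseline (ANCOVA-style), on invented toy data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group":      ["CIEI", "CIEI", "CIEI", "comparison", "comparison", "comparison"],
    "baseline":   [14, 16, 12, 15, 13, 17],          # ADOS-2 total at baseline
    "evaluation": [12, 14, 11, 15, 14, 17],          # ADOS-2 total at follow-up
    "years":      [1.8, 2.1, 2.0, 1.9, 2.2, 2.0],    # time between assessments
})
# Change score per year: (evaluation - baseline) / time; negative = improvement.
df["change_per_year"] = (df["evaluation"] - df["baseline"]) / df["years"]

# Group effect on annualised change, adjusted for baseline severity.
model = smf.ols("change_per_year ~ C(group) + baseline", data=df).fit()
print(model.summary())
```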
Procedia PDF Downloads 145
323 Asparagus racemosus Willd for Enhanced Medicinal Properties
Authors: Ashok Kumar, Parveen Parveen
Abstract:
India is bestowed with an extremely high population of plant species with medicinal value and even has two biodiversity hotspots. Indian systems of medicine, including Ayurveda, Siddha and Unani, have historically served humankind across the world since time immemorial. About 1500 plant species have been well documented in the Ayurvedic Nighantus as official medicinal plants. Additionally, several hundred plant species are routinely used as medicines by local people, especially tribes living in and around forests. The natural resources for medicinal plants have been unscientifically over-exploited, forcing a rapid depletion of their genetic diversity. Moreover, renewed global interest in herbal medicines may lead to additional depletion of the medicinal plant wealth of the country, as about 95% of the collection of medicinal plants for pharmaceutical preparations is carried out from natural forests. On the other hand, the huge export market for medicinal and aromatic plants needs to be seriously tapped to enhance the inflow of foreign currency. Asparagus racemosus Willd., a member of the family Liliaceae, is one of thirty-two plant species that have been identified as priority species for cultivation and conservation by the National Medicinal Plant Board (NMPB), Government of India. Though attention is being focused on the standardization of agro-techniques and extraction methods, little has been done on genetic improvement and the selection of desired types with higher root production and saponin content, the basic ingredient of medicinal value. Saponin not only improves defense mechanisms and controls diabetes, but the roots of this species also promote the secretion of breast milk, restore lost body weight and are considered an aphrodisiac. There is ample scope for genetic improvement of this species to enhance productivity substantially, both qualitatively and quantitatively. Emphasis is placed on selecting desired genotypes with sufficient genetic diversity for important economic traits. Hybridization between two genetically divergent genotypes could result in the synthesis of new F1 hybrids combining useful traits of both parents. The evaluation of twenty seed sources of Asparagus racemosus assembled from different geographical locations of India revealed a high degree of variability for traits of economic importance. The maximum genotypic and phenotypic variance was observed for shoot height among shoot-related traits and for root length among root-related traits. For shoot height, the genotypic variance, phenotypic variance, genotypic coefficient of variance and phenotypic coefficient of variance were recorded as 231.80, 3924.80, 61.26 and 1037.32, respectively, whereas those for root length were 9.55, 16.80, 23.46 and 41.27, respectively. The maximum genetic advance and genetic gain were obtained for shoot height among shoot-related traits and root length among root-related traits. Index values were developed for all seed sources based on the four most important traits, and Panthnagar (Uttarakhand), Jodhpur (Rajasthan), Dehradun (Uttarakhand), Chandigarh (Punjab), Jammu (Jammu & Kashmir) and Solan (Himachal Pradesh) were found to be promising seed sources. Keywords: asparagus, genetic, genotypes, variance
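The abstract reports genotypic and phenotypic variances, their coefficients of variation, genetic advance and genetic gain without restating the formulas. As a hedged reminder of how these standard variability statistics are usually computed, the sketch below evaluates them for a fully hypothetical trait; the numbers are invented and are not intended to reproduce the values reported above.

```python
# Hedged sketch of the standard variability statistics named in the abstract:
# GCV %, PCV %, broad-sense heritability, genetic advance and genetic gain.
import math

def variability_stats(var_g, var_p, trait_mean, k=2.06):
    """Return GCV %, PCV %, heritability, genetic advance and genetic gain %."""
    gcv = 100.0 * math.sqrt(var_g) / trait_mean    # genotypic coefficient of variation
    pcv = 100.0 * math.sqrt(var_p) / trait_mean    # phenotypic coefficient of variation
    h2 = var_g / var_p                             # broad-sense heritability
    ga = k * h2 * math.sqrt(var_p)                 # genetic advance at 5 % selection (k = 2.06)
    gain = 100.0 * ga / trait_mean                 # genetic advance as % of the mean
    return gcv, pcv, h2, ga, gain

# Fully hypothetical example values (not data from this study):
print(variability_stats(var_g=120.0, var_p=150.0, trait_mean=50.0))
```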
Procedia PDF Downloads 134
322 CO2e Sequestration via High Yield Crops and Methane Capture for ZEV Sustainable Aviation Fuel
Authors: Bill Wason
Abstract:
143 crude palm oil coop mills on Sumatra Island are participating in a program to transfer land from defaulted estates to small farmers while improving the sustainability of palm production to allow for biofuel and food production. GCarbon will be working with farmers to transfer technology, fertilizer, and trees to double the yield from the current baseline of 3.5 tons to at least 7 tons of oil per ha (25 tons of fruit bunches). This will be measured via evaluation of yield comparisons between participant and non-participant farms. We will also capture methane from palm oil mill effluent (POME) through belt-press filtering. Residues will be weighed, and a formula will be used to estimate methane emission reductions based on methodologies developed by other researchers. GCarbon will also cover mill ponds with a non-permeable membrane and collect methane for energy or steam production. A system for accelerating methane production involving ozone and electro-flocculation will be tested to intensify methane generation and reduce the time for wastewater treatment. A meta-analysis of research on sweet potatoes and sorghum as rotation crops will look at work in Rio Grande do Sul, Brazil, where 5 ha of test plots of industrial sweet potato have achieved yields of 60 tons and 40 tons per ha from 2 harvests in one year (100 MT/ha/year). Field trials will be duplicated in Bom Jesus das Selvas, Maranhão, to test varieties of sweet potatoes, measure yields and evaluate disease risks in the very different soil and climate of NE Brazil. Hog methane will also be captured. GCarbon Brazil, Coop Sisal, and an Australian research partner will plant several varieties of agave and use agronomic procedures to obtain yields of 880 MT per ha over 5 years. They will also plant new varieties expected to produce 3500 MT of biomass after 5 years (176-700 MT per ha per year). The goal is to show that the agave can adapt to Brazil's climate without disease problems. The study will include a field visit to growing sites in Australia, where agave is being grown commercially for biofuel production. Researchers will measure the biomass per hectare at various stages of the growing cycle, the sugar content at harvest, and other metrics to confirm that the yield of sugar per ha is up to 10 times greater than that of sugar cane. The study will look at sequestration rates by measuring soil carbon and root accumulation in various plots in Australia to confirm the carbon sequestered over 5 years of production. The agave developer estimates that 60-80 MT of sequestration per ha per year occurs from agave. The three study efforts in 3 different countries will define a feedstock pathway for jet fuel that involves very high yield crops that can produce 2 to 10 times more biomass than current assumptions. This cost-effective and less land-intensive strategy will meet global jet fuel demand and produce huge quantities of food, enabling net-zero aviation and feeding 9-10 billion people by 2050. Keywords: zero emission SAF, methane capture, food-fuel integrated refining, new crops for SAF
Procedia PDF Downloads 103