313 Trajectory Optimization for Autonomous Deep Space Missions
Authors: Anne Schattel, Mitja Echim, Christof Büskens
Abstract:
Trajectory planning for deep space missions has become a topic of great interest. Flying to space objects like asteroids presents two main challenges: one is to find rare earth elements, the other to gain scientific knowledge of the origin of the world. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical field of optimization and optimal control can be used to realize autonomous missions while protecting resources and making them safer. The resulting algorithms may also be applied to other, earth-bound applications such as deep sea navigation and autonomous driving. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All') investigates the possibilities of cognitive autonomous navigation using the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing, and surface exploration. To verify and test all methods, an interactive, real-time capable simulation using virtual reality is being developed within KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e., trajectory optimization and optimal control, including first solutions and results. In principle, there exist two ways to solve optimal control problems (OCPs), the so-called indirect and direct methods. Indirect methods have been studied for several decades, and their use requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by special NLP methods, e.g. 
sequential quadratic programming (SQP) or interior point (IP) methods. The movement of the spacecraft due to the gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). Competing mission aims, such as short flight times and low energy consumption, are considered by using a multi-criteria objective function. The resulting non-linear, high-dimensional optimization problems are solved using the software package WORHP ('We Optimize Really Huge Problems'), a routine combining SQP at an outer level with IP methods to solve the underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results underscore the need for trajectory optimization as a foundation for autonomous decision making during deep space missions. Simultaneously, they show the enormous increase in possibilities for flight maneuvers gained by being able to consider different and opposing mission objectives.
Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning.
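As an illustration of the direct (transcription) approach described above, the sketch below fully discretizes a one-dimensional double-integrator transfer with forward Euler and hands the resulting NLP to SciPy's SLSQP routine (an SQP method). The dynamics, weights, bounds, and target are illustrative stand-ins, not the KaNaRiA spacecraft model or the WORHP solver:

```python
import numpy as np
from scipy.optimize import minimize

N = 20  # number of discretization intervals

def simulate(z):
    # Explicit-Euler integration of a 1-D double integrator (position,
    # velocity) driven by the discretized control sequence
    T, u = z[0], z[1:]
    dt = T / N
    x = v = 0.0
    for ui in u:
        x += dt * v
        v += dt * ui
    return x, v

def objective(z, w_time=1.0, w_energy=0.1):
    # Multi-criteria objective: weighted sum of flight time and
    # control energy; the weights set the mission priorities
    T, u = z[0], z[1:]
    return w_time * T + w_energy * (T / N) * np.sum(u**2)

def terminal(z):
    # Equality constraints: arrive at x = 1 with zero velocity
    x, v = simulate(z)
    return [x - 1.0, v]

# Decision vector: [final time, u_0, ..., u_{N-1}] with bounded thrust
z0 = np.concatenate([[2.0], np.ones(N // 2), -np.ones(N // 2)])
res = minimize(objective, z0, method="SLSQP",
               constraints={"type": "eq", "fun": terminal},
               bounds=[(0.5, 10.0)] + [(-2.0, 2.0)] * N)
```

Raising `w_time` relative to `w_energy` shortens the transfer at the cost of more thrust, the same trade-off the weighting factors control in the multi-criteria objective above.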
Procedia PDF Downloads 412
312 Nonlinear Homogenized Continuum Approach for Determining Peak Horizontal Floor Acceleration of Old Masonry Buildings
Authors: Andreas Rudisch, Ralf Lampert, Andreas Kolbitsch
Abstract:
It is a well-known fact among the engineering community that earthquakes of comparatively low magnitude can cause serious damage to nonstructural components (NSCs) of buildings, even when the supporting structure performs relatively well. Past research focused mainly on NSCs of nuclear power plants and industrial plants. Particular attention should also be given to architectural façade elements of old masonry buildings (e.g. ornamental figures, balustrades, vases), which are very vulnerable under seismic excitation. Large numbers of these historical nonstructural components (HiNSCs) can be found in highly frequented historical city centers, and in the event of failure, they pose a significant danger to persons. In order to estimate the vulnerability of acceleration-sensitive HiNSCs, the peak horizontal floor acceleration (PHFA) is used. The PHFA depends on the dynamic characteristics of the building, the ground excitation, and induced nonlinearities. Consequently, the PHFA cannot be generalized as a simple function of height. In the present research work, an extensive case study was conducted to investigate the influence of induced nonlinearity on the PHFA for old masonry buildings. Probabilistic nonlinear FE time-history analyses considering three different hazard levels were performed. A set of eighteen synthetically generated ground motions was used as input to the structure models. An elastoplastic macro-model (multiPlas) for nonlinear homogenized continuum FE calculation was calibrated at multiple scales and applied, taking specific failure mechanisms of masonry into account. The macro-model was calibrated according to the results of specific laboratory and cyclic in situ shear tests. The nonlinear macro-model is based on the concept of multi-surface rate-independent plasticity. Material damage or crack formation is detected by reducing the initial strength after failure due to shear or tensile stress. 
As a result, shear forces can only be transmitted to a limited extent by friction once cracking begins, and the tensile strength is reduced to zero. The first goal of the calibration was consistency of the load-displacement curves between experiment and simulation. The calibrated macro-model matches well with regard to the initial stiffness and the maximum horizontal load. Another goal was the correct reproduction of the observed crack pattern and the plastic strain activities. Again, the macro-model proved to work well in this case and shows very good correlation. The results of the case study show that there is significant scatter in the absolute distribution of the PHFA between the applied ground excitations. An absolute distribution along the normalized building height was determined in the framework of probability theory. It can be observed that the extent of nonlinear behavior varies for the three hazard levels. Due to the detailed scope of the present research work, a robust comparison with code recommendations and simplified PHFA distributions is possible. The chosen methodology offers a way to determine the distribution of PHFA along the building height of old masonry structures. This permits a proper hazard assessment of HiNSCs under seismic loads.
Keywords: nonlinear macro-model, nonstructural components, time-history analysis, unreinforced masonry
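The PHFA used above is simply the peak of the absolute floor acceleration history, commonly reported normalized by the peak ground acceleration (PGA). A minimal sketch of that post-processing step, using synthetic illustrative records rather than the FE time-history output:

```python
import numpy as np

def phfa_profile(floor_accels, ground_accel):
    # Peak horizontal floor acceleration per floor, normalized by the
    # peak ground acceleration (PGA) of the input record
    pga = np.max(np.abs(ground_accel))
    return np.max(np.abs(floor_accels), axis=1) / pga

# Synthetic illustration: three floors modeled as amplified copies of a
# decaying-sine ground record (real input would be the FE time histories)
t = np.linspace(0.0, 10.0, 2001)
ag = 0.2 * 9.81 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)
floors = np.vstack([1.2 * ag, 1.6 * ag, 2.1 * ag])
ratios = phfa_profile(floors, ag)  # ≈ [1.2, 1.6, 2.1]
```

Evaluating these ratios over many ground motions, as done with the eighteen synthetic records above, yields the scatter and the height-wise distribution of PHFA discussed in the abstract.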
Procedia PDF Downloads 169
311 Anti-proliferative Activity and HER2 Receptor Expression Analysis of MCF-7 (Breast Cancer Cell) Cells by Plant Extract Coleus Barbatus (Andrew)
Authors: Anupalli Roja Rani, Pavithra Dasari
Abstract:
Background: Among several cancers, breast cancer has emerged as the most common female cancer in developing countries and the most common cause of cancer-related deaths among women worldwide. It is a molecularly and clinically heterogeneous disease. Moreover, it is a hormone-dependent tumor in which estrogens can regulate the growth of breast cells by binding to estrogen receptors (ERs). The use of natural products in cancer therapeutics stems from their biocompatibility and low toxicity. Plants are vast reservoirs of bioactive compounds, and Coleus barbatus (Lamiaceae) has shown anticancer properties against several cancer cell lines. Method: In the present study, an attempt is made to enrich knowledge of the anticancer activity of pure compounds extracted from Coleus barbatus (Andrew) on the human breast cancer cell line MCF-7. Herein, we assess the antiproliferative activity of Coleus barbatus (Andrew) plant extracts against MCF-7 and also evaluate their toxicity in normal human mammary cells, namely Human Mammary Epithelial Cells (HMEC). The active fraction of the plant extract was further purified with the help of flash chromatography, Medium Pressure Liquid Chromatography (MPLC), and preparative High-Performance Liquid Chromatography (HPLC). The structure of the pure compounds will be elucidated using modern spectroscopic methods such as Nuclear Magnetic Resonance (NMR) and Electrospray Ionisation Mass Spectrometry (ESI-MS). Later, growth inhibition, morphological assessment of cancer cells, and cell cycle analysis for the purified compounds were assessed using FACS. The growth and progression of the signaling molecules HER2 and GRP78 were studied by secretion assay using ELISA and by expression analysis using flow cytometry. 
Result: The cytotoxic effect against MCF-7 was quantified by IC50 values derived from dose-response curves, using six concentrations of twofold serially diluted samples, with SOFTMax Pro software (Molecular Devices); Ellipticine and 0.5% DMSO were used as the positive and negative controls, respectively. Conclusion: The present study shows the significance of various bioactive compounds extracted from Coleus barbatus (Andrew) root material. The extract acts as an anti-proliferative agent and shows cytotoxic effects on the human breast cancer cell line MCF-7. The plant extracts play an important role pharmacologically. The whole plant has been used in traditional medicine for decades, and the studies done have authenticated the practice. As described earlier, the plant has been used in ayurvedic and homeopathic medicine. However, more clinical and pathological studies must be conducted to investigate the unexploited potential of the plant. These studies will be very useful for drug design in the future.
Keywords: coleus barbatus, HPLC, MPLC, NMR, MCF7, flash chromatography, ESI-MS, FACS, ELISA.
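IC50 values of the kind reported here are conventionally obtained by fitting a sigmoidal dose-response model to viability data from the serial dilutions. The sketch below fits a four-parameter logistic curve; the concentrations and viability percentages are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    # Four-parameter logistic dose-response model
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Six twofold serial dilutions; concentrations (ug/mL) and viabilities
# (% of untreated control) are hypothetical placeholders
conc = np.array([100.0, 50.0, 25.0, 12.5, 6.25, 3.125])
viability = np.array([12.0, 22.0, 41.0, 65.0, 84.0, 93.0])

popt, _ = curve_fit(four_pl, conc, viability,
                    p0=[0.0, 100.0, 20.0, 1.0],
                    bounds=([-10.0, 50.0, 0.1, 0.1],
                            [30.0, 120.0, 500.0, 10.0]))
ic50 = popt[2]  # concentration giving the half-maximal response
```

Commercial packages such as SOFTMax Pro automate essentially this fit when deriving IC50 from the plate-reader dose-response curves.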
Procedia PDF Downloads 114
310 Current Zonal Isolation Regulation and Standards: A Compare and Contrast Review in Plug and Abandonment
Authors: Z. A. Al Marhoon, H. S. Al Ramis, C. Teodoriu
Abstract:
Well integrity is one of the major elements considered in drilling geothermal, oil, and gas wells. Well integrity means minimizing the risk of unplanned fluid flow in the wellbore throughout the well lifetime. It is maximized by applying technical concepts along with practical practices and strategic planning. These practices are usually governed by standardization and regulation entities. Practices during well construction can affect the integrity of the seal at the time of abandonment. On the other hand, achieving a perfect barrier system is impracticable due to the cost required. This results in a needed balance between regulatory requirements and practical application; guidelines are only effective when they are attainable in practice. Various governmental regulations and international standards offer different guidelines on what constitutes high-quality isolation from unwanted flow. Each regulatory or standardization body differs in requirements based on the abandonment objective. Some regulations account more for environmental impact, water table contamination, and possible leaks. Other regulations may lean toward driving more economic benefit while achieving an acceptable isolation criterion. The research methodology used here combines a literature review with a compare-and-contrast analysis. A literature review of various zonal isolation regulations and standards was conducted, covering guidelines from NORSOK (the Norwegian governing entity) and BSEE (the US offshore governing entity), together with API (American Petroleum Institute) and ISO (International Organization for Standardization). The compare-and-contrast analysis was conducted by assessing the objective of each abandonment regulation and standard. The current state of well barrier regulation is a balancing act. On one side of the balance, environmental impact and complete zonal isolation are considered. 
On the other side of the scale are practical application and associated cost. Some standards provide a fair amount of detail concerning technical requirements and are often flexible about the associated cost. These guidelines cover environmental impact with laws that prevent major or disastrous environmental effects of improper sealing of wells; usually, such regulations are concerned with the near-term performance of the seal rather than the long term. Consequently, applying these guidelines becomes more feasible from a cost point of view for the entities required to plug wells. Other regulations, by contrast, lean toward more environmental restriction with increased associated cost requirements. There, the environmental impact is covered in its entirety, including medium to small environmental impacts of barrier-installation operations, and clear, precise attention is given to long-term leakage prevention. The compare-and-contrast analysis of the literature showed that various objectives might tip the scale from one side of the balance (cost) to the other (sealing quality), especially with reference to zonal isolation. Furthermore, investing in initial well construction is a crucial part of ensuring safe final well abandonment: the safety and cost savings at the end of the well life cycle depend on a well-constructed isolation system at the beginning of the life cycle. Long-term studies on zonal isolation using various hydraulic and mechanical materials need to take place to further assess permanently abandoned wells and achieve the desired balance. Well drilling and isolation techniques will be more effective when they are operationally feasible and have a reasonable associated cost to aid the local economy.
Keywords: plug and abandon, P&A regulation, P&A standards, international guidelines, gap analysis
Procedia PDF Downloads 134
309 Surface Roughness in the Incremental Forming of Drawing Quality Cold Rolled CR2 Steel Sheet
Authors: Zeradam Yeshiwas, A. Krishnaia
Abstract:
The aim of this study is to characterize the resulting surface roughness of parts formed by the Single-Point Incremental Forming (SPIF) process for an ISO 3574 drawing quality cold rolled CR2 steel. The chemical composition of drawing quality cold rolled CR2 steel comprises 0.12 percent carbon, 0.5 percent manganese, 0.035 percent sulfur, and 0.04 percent phosphorus, with the remainder iron and negligible impurities. The experiments were performed on a 3-axis vertical CNC milling machining center equipped with a tool setup comprising a fixture and forming tools specifically designed and fabricated for the process. The CNC milling machine was used to transfer the tool path code generated in the Mastercam 2017 environment into three-dimensional motions by the linear incremental progress of the spindle. Blanks of drawing quality cold rolled CR2 steel sheet, 1 mm thick, were fixed along their periphery by a fixture, and hardened high-speed steel (HSS) tools with hemispherical tips of 8, 10, and 12 mm diameter were employed to fabricate sample parts. To investigate the surface roughness, hyperbolic-cone specimens were fabricated based on the chosen experimental design. The effect of process parameters on the surface roughness was studied using three important process parameters, i.e., tool diameter, feed rate, and step depth. In this study, the Taylor-Hobson Surtronic 3+ profilometer was used to determine the surface roughness of the fabricated parts in terms of the arithmetic mean deviation (Rₐ). In this instrument, a small tip is dragged across a surface while its deflection is recorded. Finally, the optimum process parameters and the main factor affecting surface roughness were found using the Taguchi design of experiments and ANOVA. 
A Taguchi experimental design with three factors and three levels for each factor, the standard orthogonal array L9 (3³), was selected for the study using the array selection table. The finishing roughness parameter Rₐ was measured for each combination of the control factors in the experimental design. Four roughness measurements were taken for a single component, and the average roughness was used to optimize the surface roughness. The lowest value of Rₐ is desirable for surface roughness improvement; for this reason, the 'smaller-the-better' equation was used for the calculation of the S/N ratio. The effect of each control factor on the surface roughness was analyzed with an S/N response table. Optimum surface roughness was obtained at a feed rate of 1500 mm/min, with a tool radius of 12 mm, and with a step depth of 0.5 mm. The ANOVA result shows that step depth is the most significant factor affecting surface roughness (91.1%).
Keywords: incremental forming, SPIF, drawing quality steel, surface roughness, roughness behavior
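The smaller-the-better S/N ratio and response-table procedure described above can be sketched as follows. The L9 (3³) layout is standard, but the Rₐ values are hypothetical placeholders, not the measured data:

```python
import numpy as np

def sn_smaller_better(values):
    # Smaller-the-better S/N ratio: S/N = -10*log10(mean(y^2)),
    # computed over the replicate measurements of one run
    y = np.asarray(values, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Standard L9 (3^3) orthogonal array: columns are the levels of
# tool diameter, feed rate, and step depth for each of the 9 runs
L9 = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3],
               [2, 1, 2], [2, 2, 3], [2, 3, 1],
               [3, 1, 3], [3, 2, 1], [3, 3, 2]])

# Hypothetical mean Ra per run (um); in the study, the four replicate
# measurements per component would be passed to sn_smaller_better
ra = np.array([2.1, 1.8, 1.6, 1.9, 1.5, 1.4, 1.7, 1.2, 1.3])
sn = np.array([sn_smaller_better([r]) for r in ra])

# S/N response table: mean S/N at each level of each factor; the level
# with the highest S/N is optimal, and the factor with the widest spread
# of level means dominates (step depth in the paper, at 91.1%)
for j, name in enumerate(["tool diameter", "feed rate", "step depth"]):
    means = [sn[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)]
    print(name, np.round(means, 2), "best level:", int(np.argmax(means)) + 1)
```

An ANOVA on the same S/N values then apportions the contribution of each factor, which is how the 91.1% figure for step depth is obtained.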
Procedia PDF Downloads 62
308 Exploring Problem-Based Learning and University-Industry Collaborations for Fostering Students’ Entrepreneurial Skills: A Qualitative Study in a German Urban Setting
Authors: Eylem Tas
Abstract:
This empirical study aims to explore the development of students' entrepreneurial skills through problem-based learning within the context of university-industry collaborations (UICs) in curriculum co-design and co-delivery (CDD). The research question guiding this study is: "How do problem-based learning and university-industry collaborations influence the development of students' entrepreneurial skills in the context of curriculum co-design and co-delivery?" To address this question, the study was conducted in a large German city and involved interviews with stakeholders from various sectors, including the private sector, government agencies, and non-governmental organizations (NGOs). These stakeholders had established collaborative partnerships with the targeted university for projects encompassing entrepreneurial development aspects in CDD. The study sought to gain insights into the intricacies and subtleties of UIC dynamics and their impact on fostering entrepreneurial skills. Qualitative content analysis, based on Mayring's guidelines, was employed to analyze the interview transcriptions. Through an iterative process of manual coding, 442 codes were generated, resulting in two main sections: "the role of problem-based learning and UIC in fostering entrepreneurship" and "challenges and requirements of problem-based learning within UIC for systematic entrepreneurship development." The chosen approach of semi-structured interviews was justified by its capacity to provide in-depth perspectives and rich data from stakeholders with firsthand experience of UICs in CDD. By enlisting participants with diverse backgrounds, industries, and company sizes, the study ensured a comprehensive and heterogeneous sample, enhancing the credibility of the findings. The first section of the analysis delved into problem-based learning and entrepreneurial self-confidence to gain a deeper understanding of UIC dynamics from an industry standpoint. 
It explored factors influencing problem-based learning, the alignment of students' learning styles and preferences with the experiential learning approach, specific activities and strategies, and the role of mentorship from industry professionals in fostering entrepreneurial self-confidence. The second section focused on various interactions within UICs, including communication, knowledge exchange, and collaboration. It identified key elements, patterns, and dynamics of interaction, highlighting challenges and limitations. Additionally, the section emphasized success stories and notable outcomes related to UICs' positive impact on students' entrepreneurial journeys. Overall, this research contributes valuable insights into the dynamics of UICs and their role in fostering students' entrepreneurial skills. UICs face challenges in communication and in establishing a common language. Transparency, adaptability, and regular communication are vital for successful collaboration. Realistic expectation management and clearly defined frameworks are crucial. Responsible data handling requires data assurance and confidentiality agreements, emphasizing the importance of trust-based relationships when dealing with data sharing and handling issues. The identified key factors and challenges provide a foundation for universities and industrial partners to develop more effective UIC strategies for enhancing students' entrepreneurial capabilities and preparing them for success in today's digital-age labor market. The study underscores the significance of collaborative learning and transparent communication in UICs for entrepreneurial development in CDD.
Keywords: collaborative learning, curriculum co-design and co-delivery, entrepreneurial skills, problem-based learning, university-industry collaborations
Procedia PDF Downloads 60
307 Transnational Solidarity and Philippine Society: A Probe on Trafficked Filipinos and Economic Inequality
Authors: Shierwin Agagen Cabunilas
Abstract:
Countless Filipinos are reeling under dire economic inequality, while many others are victims of human trafficking. Where there is extreme economic inequality, the majority of Filipinos are deprived of the basic needs of a good life, i.e., decent shelter, a safe environment, food, quality education, social security, etc. Human trafficking poses a scandal and a threat to the human rights and dignity of persons in matters of sex, gender, ethnicity, and race, among others. Economic inequality and trafficking in persons are social pathologies that need a considerable amount of attention and visible solutions at both the national and international levels. However, the Philippine government seems to fall short of its goals to lessen, if not altogether eradicate, the dire fate of many Filipinos. The lack of solidarity among Filipinos seems to further aggravate injustice and create hindrances to economic equity and to the protection of Filipinos from syndicated crimes such as human trafficking. Indifference toward the welfare and well-being of the Filipino people traps them in an unending cycle of marginalization and neglect. Transnational solidaristic action in response to these concerns is imperative. The subsequent sections first discuss the notion of solidarity and the motivating factors for collective action. While solidarity has previously been thought of as stemming from and for one's own community and people, it can be argued to be a value that defies borders. Solidarity bridges peoples of diverse societies and cultures. Although there are limits to international interventions in another state's sovereignty, such as internal political autonomy, transnational solidarity need not stand in opposition to solidarity with people suffering injustices. Governments, nations, and institutions can work together in securing justice. Solidarity is thus a positive political action that can best respond to issues of economic, class, racial, and gender injustice. 
This is followed by a critical analysis of data on Philippine economic inequality and human trafficking, linking them to the place of transnational solidaristic arrangements. Here, the present work is interested in the normative aspect of the problem. It begins with the section on economic inequality and, subsequently, human trafficking. It is argued that transnational solidarity is vital in assisting Philippine governing bodies and authorities to seriously execute innovative economic policies and developmental programs that are justice-oriented and egalitarian. Transnational solidarity acts as a corrective measure on the economic practices and activities of the Philippine government. Moreover, it is suggested that mitigating Philippine economic inequality and human trafficking involves (a) a historical analysis of the systems that brought about economic anomalies, (b) renewed and innovative economic policies, (c) mutual trust and relatively high transparency, and (d) a grass-roots, context-based approach. In conclusion, the findings are briefly sketched and integrated into an optimistic view that transnational solidarity is capable of influencing Philippine governing bodies toward socio-economic transformation and the development of the lives of Filipinos.
Keywords: Philippines, Filipino, economic inequality, human trafficking, transnational solidarity
Procedia PDF Downloads 281
306 Evaluation of Feasibility of Ecological Sanitation in Central Nepal
Authors: K. C. Sharda
Abstract:
Introduction: Worldwide, almost half of the population lacks proper access to improved sanitation services, and in Nepal a large number of people live without access to any sanitation facility. An ecological sanitation toilet, defined as a water-conserving and nutrient-recycling system for the use of human urine and excreta in agriculture, would contribute greatly to utilizing locally available resources, regenerating soil fertility, saving national currency, and achieving the goal of eliminating open defecation in countries like Nepal. The objectives of the research were to test the efficacy of human urine for improving crop performance and to evaluate the feasibility of ecological sanitation in a rural area of Central Nepal. Materials and Methods: The field investigation was carried out at Palung Village Development Committee (VDC) of Makawanpur District, Nepal from March to August 2016. Five eco-san toilets were constructed in two villages (Angare and Bhot Khoriya), and a questionnaire survey was carried out. During the survey, respondents were asked about socio-economic parameters, farming practices, and awareness of ecological sanitation and of the fertilizer value of human urine and excreta in agriculture. Prior to the field experiment, soil was sampled for analysis of its basic characteristics. In the field experiment, cauliflower was cultivated for a month at the two sites to compare the fertilizer value of urine with chemical fertilizer and no fertilizer, with three replications. The harvested plant samples were analyzed to determine the nutrient content of plants under the different treatments. Results and Discussion: Eighty-three percent of respondents were engaged in agriculture, growing mainly vegetables, which may raise the feasibility of ecological sanitation. In the study area, water deficiency in the dry season, high demand for chemical fertilizer, and a lack of sanitation awareness were identified as problems to be solved. 
The soil at Angare has a sandier texture and lower nitrogen content than that at Bhot Khoriya. While the field experiment at Angare showed that the aboveground biomass of cauliflower in the urine-fertilized plots was similar to that in the chemically fertilized plots and higher than in the non-fertilized plots, no significant difference among the treatments was found at Bhot Khoriya. The more distinctive response of crop growth to the three treatments at the former site might be attributed to poorer soil productivity, which in turn could be caused by poorer inherent soil fertility and poorer past management by the farmer at Angare. Thus, the use of urine as fertilizer could help poor farmers with low-quality soil. The significantly different contents of nitrogen and potassium in the plant samples among the three treatments at Bhot Khoriya require further investigation. When urine is utilized as a fertilizer, productivity could be increased, and the money otherwise spent on chemical fertilizer could be used for other livelihood activities. Ecological sanitation is feasible in areas with similar socio-economic parameters.
Keywords: cauliflower, chemical fertilizer, ecological sanitation, Nepal, urine
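The significance testing behind statements like "no significant difference among the treatments" is typically a one-way ANOVA over the replicated plots. A minimal sketch with invented biomass numbers (not the study's measurements):

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical aboveground biomass (g/plant), three replications per
# treatment, mimicking the urine / chemical / no-fertilizer comparison
urine    = [52.0, 48.5, 55.1]
chemical = [50.2, 53.7, 49.9]
no_fert  = [31.4, 28.9, 33.0]

f_stat, p_value = f_oneway(urine, chemical, no_fert)
significant = p_value < 0.05  # reject the equal-means hypothesis at 5%
```

With numbers like these, where urine and chemical fertilizer perform similarly and both exceed the control, the test is significant, matching the pattern reported for the Angare site; at Bhot Khoriya the same test would yield p > 0.05.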
Procedia PDF Downloads 358
305 Utilizing Quick Response (QR) Codes and Uniform Resource Locator (URL) Links in Printed Discharge Instructions for Chronic Pain Patients
Authors: Jawad Arshad
Abstract:
Back and neck pain result in more than 3 million Emergency Department (ED) visits per year. Approximately 10-20% of patients with acute low back pain will continue to have moderate to severe pain after 3 months, and 30% will develop chronic functional impairment. Strategies for analgesia for musculoskeletal pain upon ED discharge have been shown to be highly variable and provider-dependent. The American College of Physicians, United Kingdom, and Danish treatment guidelines all recommend exercise as the first-line treatment for low back pain persisting >12 weeks and as an adjunctive treatment to education and encouraging continued mobility in acute low back pain. URL links and QR codes were embedded in Epic's Electronic Medical Record (EMR) for seamless integration into discharge instructions. To our knowledge, this is the first time QR codes have been embedded into an EMR and used as an adjunct to computer-generated discharge instructions nationwide. Materials and Methods: Eleven two-minute physical therapy (PT) videos treating a variety of common painful musculoskeletal conditions were prepared in collaboration between the Pain Management and Physical Therapy departments of a large academic center. The videos were hosted on Vimeo and made available for free public access. Dynamic QR codes and URL links were created for each video and integrated into the Epic EMR discharge instructions. Both data from physician-prescribed PT videos and results of public searches are included. As the case report is devoid of patient-identifiable information, it is exempt from IRB review requirements per IRB policy. Results/Case Report: From 2/6/2021 to 5/19/21, the videos were prescribed in Epic a total of 76 times, and the dynamic QR codes were scanned a total of 84 times. Low back pain (n=32), followed by neck (n=8), shoulder (n=7), and radicular low back pain (n=7), were the most common painful complaints for which a PT video was prescribed. 
The videos were played on Vimeo 790 times, with 83% (n=656) being unique views and 17% returning users. The shoulder pain video (n=489), followed by the neck (n=36) and low back pain (n=37) videos, were the most commonly played. Discussion: Patients are viewing their physical therapy videos via QR codes provided by their physicians. The large number of public views of the physical therapy videos suggests either that patients are sharing the link/QR code or that patients not affiliated with our institution are searching for videos to treat their painful complaints. The data suggest a need for reputable home PT videos treating commonly encountered painful complaints, with 17% of viewers returning for repeat views. The shoulder pain video was played significantly more than the others despite not being given to patients more frequently, raising suspicion that it may have received increased traffic via an outside affiliate, website, or influencer. When shoulder pain is excluded from the data, individuals watched 67% of the entire video on average, indicating high engagement.
Keywords: pain, EMR, emergency medicine, discharge
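A dynamic QR code works by encoding a fixed redirect URL whose target can be changed and whose scans can be counted server-side, which is how scan statistics like those above are collected. A minimal, hypothetical sketch of that redirect bookkeeping (the class, short codes, and URL are invented for illustration):

```python
class DynamicQrRedirector:
    """Maps the short codes printed as QR codes to their current targets
    and counts scans, as a dynamic-QR service would."""

    def __init__(self):
        self.targets = {}  # short code -> current target URL
        self.scans = {}    # short code -> scan count

    def register(self, code, target_url):
        # The printed QR code never changes; only this mapping does
        self.targets[code] = target_url
        self.scans.setdefault(code, 0)

    def resolve(self, code):
        # Called each time a patient scans the printed QR code
        self.scans[code] += 1
        return self.targets[code]

r = DynamicQrRedirector()
r.register("lowback", "https://vimeo.com/example-low-back-pt")  # hypothetical URL
for _ in range(3):
    r.resolve("lowback")
```

Because the target is resolved at scan time, the hosted video can be updated or replaced without reprinting discharge instructions, and per-code scan counts fall out of the same lookup.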
Procedia PDF Downloads 530
304 Linear Evolution of Compressible Görtler Vortices Subject to Free-Stream Vortical Disturbances
Authors: Samuele Viaro, Pierre Ricco
Abstract:
Görtler instabilities arise in boundary layers from an imbalance between pressure and centrifugal forces caused by concave surfaces. Their spatial streamwise evolution influences transition to turbulence. It is therefore important to understand even the early stages, where perturbations, still small, grow linearly and could be controlled more easily. This work presents a rigorous theoretical framework for compressible flows using the linearized unsteady boundary-region equations, where only the streamwise pressure gradient and streamwise diffusion terms are neglected from the full governing equations of fluid motion. Boundary and initial conditions are imposed through an asymptotic analysis in order to account for the interaction of the boundary layer with free-stream turbulence. The resulting parabolic system is discretized with a second-order finite difference scheme. Realistic flow parameters are chosen from wind tunnel studies performed at supersonic and subsonic conditions. The Mach number ranges from 0.5 to 8, with two different radii of curvature, 5 m and 10 m, frequencies up to 2000 Hz, and vortex spanwise wavelengths from 5 mm to 20 mm. The evolution of the perturbation flow is shown through velocity, temperature, and pressure profiles relatively close to the leading edge, where non-linear effects can still be neglected, as well as through the growth rate. Results show that a global stabilizing effect exists with increasing Mach number, frequency, spanwise wavenumber, and radius of curvature. In particular, at high Mach numbers curvature effects are less pronounced and thermal streaks become stronger than velocity streaks. This increase of temperature perturbations saturates at approximately Mach 4 and is limited to the early stage of growth, near the leading edge. In general, Görtler vortices evolve closer to the surface than in a flat-plate scenario, but their location shifts toward the edge of the boundary layer as the Mach number increases.
In fact, a jet-like behavior appears for steady vortices having small spanwise wavelengths (less than 10 mm) at Mach 8, creating a region of unperturbed flow close to the wall. A similar response is also found at the highest frequency considered for a Mach 3 flow. Larger vortices are found to have a higher growth rate but are less influenced by the Mach number. An eigenvalue approach is also employed to study the amplification of the perturbations sufficiently far downstream from the leading edge. These eigenvalue results are compared with those obtained through the initial-value approach with inhomogeneous free-stream boundary conditions. All of the parameters studied here have a significant influence on the evolution of the instabilities for the Görtler problem, which is indeed highly dependent on initial conditions.
Keywords: compressible boundary layers, Görtler instabilities, receptivity, turbulence transition
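The marching idea behind the abstract above can be illustrated with a toy example: because the boundary-region equations are parabolic in the streamwise direction, the perturbation profile can be advanced downstream step by step, with second-order central differences in the wall-normal direction. The sketch below applies an implicit streamwise step and a tridiagonal (Thomas) solve to the model equation u_x = u_yy; the model equation, grid sizes, and Dirichlet boundary conditions are illustrative assumptions, not the authors' actual system.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d (Thomas algorithm)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def march(u, dx, dy, nsteps):
    """March the profile u(y) downstream nsteps times: backward Euler in
    the streamwise direction x, second-order central differences in the
    wall-normal direction y, with u held fixed at both ends."""
    r = dx / dy**2
    n = len(u)
    for _ in range(nsteps):
        a = [-r] * n
        b = [1.0 + 2.0 * r] * n
        c = [-r] * n
        # Dirichlet boundary rows: keep the end values unchanged
        a[0] = c[0] = a[-1] = c[-1] = 0.0
        b[0] = b[-1] = 1.0
        u = thomas(a, b, c, u)
    return u

# Example: a localized disturbance spreads and decays as it is marched
ny = 41
u0 = [0.0] * ny
u0[ny // 2] = 1.0  # perturbation at mid-height
u1 = march(u0, dx=1e-3, dy=1.0 / (ny - 1), nsteps=50)
```

The implicit step keeps the march stable regardless of the ratio dx/dy², which is why parabolized solvers of this kind can take comparatively large streamwise steps.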
303 Induction Machine Design Method for Aerospace Starter/Generator Applications and Parametric FE Analysis
Authors: Wang Shuai, Su Rong, K. J. Tseng, V. Viswanathan, S. Ramakrishna
Abstract:
The More-Electric-Aircraft concept in the aircraft industry places increasing demands on embedded starter/generators (ESGs). The high-speed and high-temperature environment within an engine poses great challenges to the operation of such machines. In view of these challenges, squirrel cage induction machines (SCIMs) have shown advantages due to their simple rotor structure, absence of temperature-sensitive components, and low torque ripples. The tight operating constraints arising from typical ESG applications, together with the detailed operating principles of SCIMs, have been exploited to derive a mathematical interpretation of the ESG-SCIM design process. The resulting non-linear mathematical treatment yielded a unique solution to the SCIM design problem for each configuration of pole-pair number p, slots/pole/phase q, and conductors/slot zq, easily implemented via loop patterns. It was also found that not all configurations led to feasible solutions, and the corresponding observations have been elaborated. The developed mathematical procedures also proved an effective framework for optimization among electromagnetic, thermal, and mechanical aspects by allocating corresponding degree-of-freedom variables. Detailed 3D FEM analysis has been conducted to validate the resulting machine performance against design specifications. To obtain higher power ratings, electrical machines often have to increase the slot areas to accommodate more windings. Since the available space for embedding such machines inside an engine is usually short in length, an axial air-gap arrangement appears more appealing than its radial-gap counterpart. The aforementioned approach has been adopted in case studies designing series of AFIMs and RFIMs with increasing power ratings, and the following observations were obtained. Under the strict rotor diameter limitation, the AFIM extended axially to provide the increased slot areas, while the RFIM expanded radially with the same axial length.
Beyond certain power ratings, the AFIM led to a long cylindrical geometry, while the RFIM topology retained the desired short disk shape. Besides the different dimension growth patterns, AFIMs and RFIMs also exhibited dissimilar performance degradations regarding power factor, torque ripples, and rated slip as power ratings increased. Parametric response curves were plotted to better illustrate these influences of increased power ratings. The case studies may provide a basic guideline to assist potential users in deciding between AFIM and RFIM for relevant applications.
Keywords: axial flux induction machine, electrical starter/generator, finite element analysis, squirrel cage induction machine
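The "loop pattern" idea mentioned in the abstract, where each configuration of pole-pair number p, slots/pole/phase q, and conductors/slot zq is visited in nested loops and screened for feasibility, can be sketched as below. The feasibility checks (a total slot-count limit and even conductors per slot) are placeholder assumptions for illustration, not the paper's actual design equations.

```python
def enumerate_configs(phases=3, max_p=4, max_q=4, max_zq=20, max_slots=72):
    """Enumerate candidate SCIM winding configurations via nested loops,
    discarding infeasible ones. Returns (p, q, zq, total_slots) tuples."""
    feasible = []
    for p in range(1, max_p + 1):            # pole-pair number
        for q in range(1, max_q + 1):        # slots per pole per phase
            slots = 2 * p * phases * q       # total stator slots
            if slots > max_slots:
                continue  # infeasible: stator cannot accommodate the slots
            for zq in range(2, max_zq + 1, 2):  # even conductors per slot
                feasible.append((p, q, zq, slots))
    return feasible

configs = enumerate_configs()
```

As in the abstract, not every (p, q, zq) combination survives the screen; here the p = 4, q = 4 combinations are rejected by the placeholder slot-count limit, mirroring how real electromagnetic and thermal constraints prune the search space.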
302 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management
Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro
Abstract:
This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient of all, while configuration 10, with 4 input variables, was considered the most effective given its smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with producer data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions.
The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAEs as low as 0.01 mm/day and 0.03 mm/day for the two configurations, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yielded values between 0.02 and 0.04 for the producer data. The results of this study further suggest that the developed technique can be applied to other locations by using site-specific data to improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the models' response to different seasonal and climatic conditions.
Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization
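The error metrics quoted above (MAE, MSE, RMSE, R²) can be computed directly from paired observed/predicted ET₀ values. The sketch below uses their standard definitions with made-up sample numbers for illustration only; the values are not from the study.

```python
import math

def metrics(obs, pred):
    """Standard regression error metrics for paired observations."""
    n = len(obs)
    errors = [p - o for o, p in zip(obs, pred)]
    mae = sum(abs(e) for e in errors) / n          # mean absolute error
    mse = sum(e * e for e in errors) / n           # mean squared error
    rmse = math.sqrt(mse)                          # root mean squared error
    mean_obs = sum(obs) / n
    ss_res = sum(e * e for e in errors)            # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in obs) # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

# Hypothetical daily ET0 values (mm/day), for illustration only
observed  = [3.2, 4.1, 2.8, 5.0, 4.4]
predicted = [3.1, 4.2, 2.9, 4.9, 4.5]
m = metrics(observed, predicted)
```

Note that R² compares the residuals against the variance of the observations, so an R² near 1 (as reported in the abstract) means the model explains almost all of the observed variability.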
301 The Semiotics of Soft Power; An Examination of the South Korean Entertainment Industry
Authors: Enya Trenholm-Jensen
Abstract:
This paper employs various semiotic methodologies to examine the mechanism of soft power. Soft power refers to a country’s global reputation and its ability to leverage that reputation to achieve certain aims. South Korea has invested heavily in its soft power strategy for a multitude of predominantly historical and geopolitical reasons. On account of this investment and the global prominence of its strategy, South Korea was considered the optimal candidate for the aims of this investigation. Having isolated the entertainment industry as one of the most heavily funded segments of the South Korean soft power strategy, the analysis restricted itself to this sector. Within this industry, two entertainment products were selected as case studies. The case studies were chosen based on commercial success according to metrics such as streams, purchases, and subsequent revenue. This criterion was deemed the most objective and verifiable indicator of the products’ general appeal. The entertainment products that met the chosen criterion were Netflix’s “Squid Game” and BTS’ hit single “Butter”. The methodologies employed were chosen according to the medium of the entertainment products. For “Squid Game”, an aesthetic analysis was carried out to investigate how multi-layered meanings were mobilized in a show popularized by its visual grammar. To examine “Butter”, both music semiology and linguistic analysis were employed. The music section featured an analysis underpinned by denotative and connotative music semiotic theories borrowing from the scholars Theo van Leeuwen and Martin Irvine. The linguistic analysis focused on stance and semantic fields according to scholarship by George Yule and John W. DuBois. The aesthetic analysis of the first case study revealed intertextual references to famous artworks, which served to augment the emotional provocation of the Squid Game narrative.
For the second case study, the findings exposed a set of musical meaning units arranged in a patchwork of familiar and futuristic elements to achieve a song that exists on the boundary between old and new. The linguistic analysis of the song’s lyrics found a deceptively innocuous surface-level meaning that bore implications for authority, intimacy, and commercial success. Whether through means of visual metaphor, embedded auditory associations, or linguistic subtext, the collective findings of the three analyses exhibited a desire to conjure a form of positive arousal in the spectator. In the synthesis section, this process is likened to branding. Through an exploration of branding, the entertainment products can be understood as cogs in a larger operation aiming to create positive associations with Korea as a country and a concept. Limitations in the form of a timeframe-biased perspective are addressed, and directions for future research are suggested.
Keywords: BTS, cognitive semiotics, entertainment, soft power, South Korea, Squid Game
300 Coping Strategies and Characterization of Vulnerability in the Perspective of Climate Change
Authors: Muhammad Umer Mehmood, Muhammad Luqman, Muhammad Yaseen, Imtiaz Hussain
Abstract:
Climate change is a harsh reality that cannot easily be ignored. It is a phenomenon that has brought a collection of challenges for mankind, and scientists have documented many of its negative impacts on human life and on the resources on which humanity depends. Whenever changes happen in nature, they affect the whole globe, but their effects vary from region to region: the climate of every region differs, and even within a state or country, provinces experience different climatic conditions. It is therefore essential that the response and coping strategy of a specific region be matched to the prevailing risk. In the present study, the objective was to assess the coping strategies and vulnerability of small landholders so that professional suggestions could be made to address the vulnerability of small farmers. A cross-sectional research design with a quantitative approach was used. The study was conducted in Khanewal district of Punjab, Pakistan. 120 small farmers, all above the age of 15 years, were interviewed after random sampling from the population of the respective area. A questionnaire was developed after keen observation of conditions in the area, and the content and face validity of the instrument were assessed with SPSS and experts in the field. Data were analyzed through SPSS using descriptive statistics. Of the sample of 120, 81.67% of the respondents claimed that the environment is getting warmer and is no longer fit for their present agricultural practices. 84.17% of the sample expressed serious concern that they are disturbed by changes in rainfall patterns and vulnerable to climatic effects. On the other hand, they expressed that they are not good at tackling the effects of climate change.
Adoption of coping strategies such as changes in cropping pattern, use of resistant varieties, varieties with minimum water requirements, intercropping, and tree planting was low among more than half of the sample. From the sample, 63.33% of small farmers said that the coping strategies they adopt are not effective enough. The present study showed that subsistence farming, lack of marketing and overall infrastructure, lack of access to social security networks, limited access to agricultural extension services, inadequate access to agrometeorological systems, unawareness of and lack of access to scientific developments, and low crop yields are the prominent factors responsible for the vulnerability of small farmers. A comprehensive study should be conducted at the national level so that a national policy can be formulated to cope with this dilemma in the future. Mainstreaming and collaboration among researchers and academicians could prove beneficial in this regard, and the interest of national leaders also matters. Proper policies to address the vulnerability factors should be the top priority. The world is taking up this issue with full responsibility, as should we, keeping the local situation in view.
Keywords: adaptation, coping strategies, climate change, Pakistan, small farmers, vulnerability
299 Using Differentiated Instruction Applying Cognitive Approaches and Strategies for Teaching Diverse Learners
Authors: Jolanta Jonak, Sylvia Tolczyk
Abstract:
Educational systems are tasked with preparing students for future success in academic or work environments. Schools strive to achieve this goal, but it is often challenging because conventional teaching approaches are frequently ineffective in increasingly diverse educational systems. In today’s global society, educational systems are becoming increasingly diverse in terms of cultural and linguistic differences, learning preferences and styles, and ability and disability. Through increased understanding of disabilities and improved identification processes, students with some form of disability tend to be identified earlier than in the past, meaning that more students with identified disabilities are being supported in our classrooms. Also, a large majority of students with disabilities are educated in general education environments. Due to cognitive makeup and life experiences, students have varying learning styles and preferences affecting how they receive and express what they are learning. Many students come from bi- or multilingual households and have varying proficiencies in the English language, further affecting their learning. All these factors need to be seriously considered when developing learning opportunities for students. Educators try to adjust their teaching practices as they discover that conventional methods are often ineffective in reaching each student’s potential. Many teachers do not have the necessary educational background or training to know how to teach students whose learning needs vary from the norm. This is further complicated by the fact that many classrooms lack consistent access to interventionists/coaches adequately trained in evidence-based approaches to meet the needs of all students, regardless of what their academic needs may be.
One evidence-based way to provide successful education for all students is to incorporate cognitive approaches and strategies that tap into the affective, recognition, and strategic networks in the student's brain. This can be done through Differentiated Instruction (DI). Differentiated Instruction is an increasingly recognized model established on the basic principles of Universal Design for Learning. This form of support ensures that regardless of students’ learning preferences and cognitive learning profiles, they have opportunities to learn through approaches suited to their needs. This approach improves the educational outcomes of students with special needs, and it benefits other students as it accommodates learning styles as well as the scope of unique learning needs evident in the typical classroom setting. Differentiated Instruction is also recognized as an evidence-based best practice in education and is highly effective when implemented within the tiered system of the Response to Intervention (RTI) model. Recognition of DI is becoming more common; however, there is still limited understanding of the effective implementation and use of strategies that can create unique learning environments for each student within the same setting. By employing knowledge of a variety of instructional strategies, general and special education teachers can facilitate optimal learning for all students, with and without a disability. A desired byproduct of DI is that it can eliminate inaccurate perceptions about students’ learning abilities, unnecessary referrals for special education evaluations, and inaccurate decisions about the presence of a disability.
Keywords: differentiated instruction, universal design for learning, special education, diversity
298 A Comparative Evaluation of Cognitive Load Management: Case Study of Postgraduate Business Students
Authors: Kavita Goel, Donald Winchester
Abstract:
In a world of information overload and work complexities, academics often struggle to create an online instructional environment enabling efficient and effective student learning. Research has established that students’ learning styles differ; some learn faster when taught using audio and visual methods. Attributes like prior knowledge and mental effort affect their learning. Cognitive load theory posits that learners have limited processing capacity, and that cognitive load depends on the learner’s prior knowledge, the complexity of content and tasks, and the instructional environment. Hence, the proper allocation of cognitive resources is critical for students’ learning. Consequently, a lecturer needs to understand the limits and strengths of human learning processes and the various learning styles of students, and accommodate these requirements while designing online assessments. As acknowledged in the cognitive load theory literature, visual and auditory explanations of worked examples potentially lead to a reduction of cognitive load (effort) and increased facilitation of learning when compared to conventional sequential-text problem solving. This helps learners utilize both subcomponents of their working memory. Instructional design changes were introduced at the case site for the delivery of postgraduate business subjects. To make effective use of auditory and visual modalities, video-recorded lectures and key-concept webinars were delivered to students. Videos were prepared to free students’ limited working memory from irrelevant mental effort, as all elements on a visual screen can be viewed simultaneously and processed quickly, facilitating greater processing efficiency. Most case study students in the postgraduate programs are adults, working full-time at higher management levels and studying part-time. Their learning styles and needs differ from those of other tertiary students.
The purpose of the audio and visual interventions was to lower the students’ cognitive load and provide an online environment supportive of their efficient learning. These changes were expected to favourably impact the students’ learning experience, academic performance, and retention. This paper posits that these changes to instructional design help students integrate new knowledge into their long-term memory. A mixed-methods case study methodology was used in this investigation. Primary data were collected from interviews and surveys of students and academics; secondary data were collected from the organisation’s databases and reports. Some evidence was found that the academic performance of students improves when the new instructional design changes are introduced, although the improvement was not statistically significant. However, the overall grade distribution of students’ academic performance changed and skewed higher, which suggests a deeper understanding of the content. Feedback received from students identified that recorded webinars served as better learning aids than material with text alone, especially for more complex content. The recorded webinars on the subject content and assessments give students the flexibility to access this material at any time from repositories, as many times as needed, which suits their learning styles. Visual and audio information enters students’ working memory more effectively. Also, as each assessment included the application of the concepts, conceptual knowledge interacted with pre-existing schemas in long-term memory and lowered students’ cognitive load.
Keywords: cognitive load theory, learning style, instructional environment, working memory
297 Multiaxial Stress Based High Cycle Fatigue Model for Adhesive Joint Interfaces
Authors: Martin Alexander Eder, Sergei Semenov
Abstract:
Many glass-epoxy composite structures, such as large utility wind turbine rotor blades (WTBs), comprise adhesive joints with typically thick bond lines used to connect the different components during assembly. Performance optimization of rotor blades to increase power output while maintaining high stiffness-to-low-mass ratios entails intricate geometries in conjunction with complex anisotropic material behavior. Consequently, adhesive joints in WTBs are subject to multiaxial stress states with significant stress gradients depending on the local joint geometry. Moreover, the dynamic aero-elastic interaction of the WTB with the airflow generates non-proportional, variable-amplitude stress histories in the material. Experience shows that a prominent failure type in WTBs is high cycle fatigue failure of adhesive bond line interfaces, which has over time developed into a design driver as WTB sizes increase rapidly. Structural optimization employed at an early design stage therefore sets high demands on computationally efficient interface fatigue models capable of predicting the critical locations prone to interface failure. The numerical stress-based interface fatigue model presented in this work uses the Drucker-Prager criterion to compute three different damage indices corresponding to the two interface shear tractions and the outward normal traction. The two-parameter Drucker-Prager model was chosen because of its ability to consider shear strength enhancement under compression and shear strength reduction under tension. The governing interface damage index is taken as the maximum of the three. The damage indices are computed through the well-known linear Palmgren-Miner rule after separate rainflow counting of the equivalent shear stress history and the equivalent pure normal stress history.
The equivalent stress signals are obtained by self-similar scaling of the Drucker-Prager surface, whose shape is defined by the uniaxial tensile strength and the shear strength, such that it intersects with the stress point at every time step. This approach implicitly assumes that the damage caused by the prevailing multiaxial stress state is the same as the damage caused by an amplified equivalent uniaxial stress state in the three interface directions. The model was implemented as a Python plug-in for the commercially available finite element code Abaqus for use with solid elements. The model was used to predict the interface damage of an adhesively bonded, tapered glass-epoxy composite cantilever I-beam tested by LM Wind Power under constant-amplitude compression-compression tip load in the high cycle fatigue regime. Results show that the model was able to predict the location of debonding in the adhesive interface between the webfoot and the cap. Moreover, with a set of two different constant-life diagrams, namely in shear and tension, it was possible to predict both the fatigue lifetime and the failure mode of the sub-component with reasonable accuracy. It can be concluded that the fidelity, robustness, and computational efficiency of the proposed model make it especially suitable for rapid fatigue damage screening of large 3D finite element models subject to complex dynamic load histories.
Keywords: adhesive, fatigue, interface, multiaxial stress
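The linear Palmgren-Miner accumulation at the heart of the model above can be sketched in a few lines: after the equivalent stress history has been rainflow-counted into (amplitude, cycle count) pairs, the damage index is D = Σ nᵢ/Nᵢ, with Nᵢ taken from an S-N curve and failure predicted at D ≥ 1. The Basquin-type S-N curve and its parameters below are placeholders for illustration, not the adhesive's measured properties.

```python
def cycles_to_failure(stress_amplitude, A=1e12, m=8.0):
    """Basquin-type S-N curve: N = A * S**(-m). A and m are
    illustrative placeholder material constants."""
    return A * stress_amplitude ** (-m)

def miner_damage(counted_cycles):
    """Linear Palmgren-Miner damage index D = sum(n_i / N_i).
    counted_cycles: list of (stress_amplitude, n_cycles) pairs, e.g.
    the output of a rainflow count. Failure is predicted at D >= 1."""
    return sum(n / cycles_to_failure(s) for s, n in counted_cycles)

# Example: a mixed-amplitude history (amplitude in MPa, cycle counts)
history = [(10.0, 5000), (20.0, 200), (30.0, 10)]
D = miner_damage(history)
```

Because the S-N exponent m is large for adhesives, the few high-amplitude cycles dominate D, which is why separating and scaling the shear and normal equivalent histories before counting matters so much in the model described above.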
296 Assessment of Very Low Birth Weight Neonatal Tracking and a High-Risk Approach to Minimize Neonatal Mortality in Bihar, India
Authors: Aritra Das, Tanmay Mahapatra, Prabir Maharana, Sridhar Srikantiah
Abstract:
In the absence of adequate well-equipped neonatal-care facilities serving rural Bihar, India, the practice of essential home-based newborn care remains critically important for the reduction of neonatal and infant mortality, especially among pre-term and small-for-gestational-age (low-birth-weight) newborns. To improve child health parameters in Bihar, a ‘Very-Low-Birth-Weight (vLBW) Tracking’ intervention has been conducted by CARE India since 2015, targeting public-facility-delivered newborns weighing ≤2000 g at birth, to improve their identification and the provision of immediate post-natal care. To assess the effectiveness of the intervention, 200 public health facilities were randomly selected from all functional public-sector delivery points in Bihar, and various outcomes were tracked among the neonates born there. Thus far, one pre-intervention (Feb-Apr 2015-born neonates) and three post-intervention (Sep-Oct 2015-, Sep-Oct 2016- and Sep-Oct 2017-born children) follow-up studies have been conducted. In each round, interviews were conducted with the mothers/caregivers of successfully tracked children to understand outcomes, service coverage, and care-seeking during the neonatal period. Data from 171 matched facilities common across all rounds were analyzed using SAS 9.4. Identification of neonates with birth weight ≤2000 g improved from 2% at baseline to 3.3%-4% during the intervention. All indicators pertaining to post-natal home visits by frontline workers (FLWs) improved. Significant improvements between baseline and post-intervention rounds were also noted regarding mothers being informed about a ‘weak’ child, both at the facility (R1 = 25% to R4 = 50%) and at home by an FLW (R1 = 19% to R4 = 30%). The practice of ‘Kangaroo Mother Care (KMC)’, an important component of essential newborn care, showed significant improvement in the post-intervention period compared to baseline both at the facility (R1 = 15% to R4 = 31%) and at home (R1 = 10% to R4 = 29%).
Detection and birth-weight recording of extremely low-birth-weight newborns (<1500 g) showed an increasing trend. Moreover, there was a downward trend in mortality across rounds in each birth-weight stratum (<1500 g, 1500-1799 g and ≥1800 g). After adjustment for the differential distribution of birth weights, mortality was found to decline significantly from R1 (22.11%) to R4 (11.87%). A significantly declining trend was also observed for both early and late neonatal mortality and morbidities. Multiple regression analysis identified birth during the immediate post-intervention phase as well as during the maintenance phase, birth weight >1500 g, being born to a low-parity mother, receiving a visit from an FLW in the first week, and/or receiving advice on extra care from an FLW as predictors of survival during the neonatal period among vLBW newborns. vLBW tracking was found to be a successful and sustainable intervention and has already been handed over to the Government.
Keywords: weak newborn tracking, very low birth weight babies, newborn care, community response
295 Using Virtual Reality Exergaming to Improve Health of College Students
Authors: Juanita Wallace, Mark Jackson, Bethany Jurs
Abstract:
Introduction: Exergames, VR games used as a form of exercise, are being used to reduce sedentary lifestyles in a vast number of populations. However, there is a distinct lack of research comparing the physiological response during VR exergaming to that of traditional exercise. The purpose of this study was to provide a foundational investigation establishing changes in physiological responses resulting from VR exergaming in a college-aged population. Methods: In this IRB-approved study, college-aged students were recruited to play a virtual reality exergame (Beat Saber) on the Oculus Quest 2 (Facebook, 2021) in either a control group (CG) or a training group (TG). Both groups consisted of subjects who were not habitual users of virtual reality. The CG played VR one time per week for three weeks, and the TG played 150 min/week for three weeks. Each group played the same nine Beat Saber songs, in a randomized order, during 30-minute sessions. Song difficulty was increased during play based on song performance. Subjects completed pre- and posttests at which the following were collected: • Beat Saber game metrics: song level played, song score, number of beats completed per song, and accuracy (beats completed/total beats) • Physiological data: heart rate (max and avg.), active calories • Demographics. Results: A total of 20 subjects completed the study; nine in the CG (3 males, 6 females) and 11 (5 males, 6 females) in the TG. • Beat Saber song metrics: The TG improved performance from normal/hard difficulty to hard/expert; the CG stayed at normal/hard difficulty. At the pretest there was no difference in game accuracy between groups; however, at the posttest the CG had higher accuracy. • Physiological data (Table 1): Average heart rates were similar between the TG and CG at both the pre- and posttest; however, the TG expended more total calories.
Discussion: Due to the lack of peer-reviewed literature on VR exergaming using Beat Saber, the results of this study cannot be directly compared. However, they can be compared with previously established trends for traditional exercise, in which an increase in training volume equates to increased efficiency at the activity. The TG should naturally increase in difficulty at a faster rate than the CG because they played 150 minutes per week. Heart rate and caloric responses also increase during traditional exercise as load increases (i.e., speed or resistance). The TG reported an increase in total calories due to a higher difficulty of play. The decrease in song accuracy in the TG can be explained by the increased difficulty of play. Conclusion: VR exergaming is comparable to traditional exercise for loads within 50-70% of maximum heart rate. The ability to use VR for health could motivate individuals who do not engage in traditional exercise. In addition, individuals in health professions can and should promote VR exergaming as a viable way to increase physical activity and improve health in their clients/patients.
Keywords: virtual reality, exergaming, health, heart rate, wellness
294 Development of an Automatic Control System for ex vivo Heart Perfusion
Authors: Pengzhou Lu, Liming Xin, Payam Tavakoli, Zhonghua Lin, Roberto V. P. Ribeiro, Mitesh V. Badiwala
Abstract:
Ex vivo Heart Perfusion (EVHP) has been developed as an alternative strategy to expand cardiac donation by enabling resuscitation and functional assessment of hearts donated from marginal donors, which were previously not accepted. EVHP parameters, such as perfusion flow (PF) and perfusion pressure (PP), are crucial for optimal organ preservation. However, with the heart’s constant physiological changes during EVHP, such as in coronary vascular resistance, manual control of these parameters is imprecise and cumbersome for the operator. Additionally, low control precision and long adjustment times may lead to irreversible damage to the myocardial tissue. To solve this problem, an automatic heart perfusion system was developed by applying a Human-Machine Interface (HMI) and a Programmable-Logic-Controller (PLC)-based circuit to control PF and PP. The PLC-based control system collects PF and PP data through flow probes and pressure transducers. It has two control modes: the RPM-flow mode and the pressure mode. The RPM-flow control mode is an open-loop system. It influences PF by providing and maintaining the desired centrifugal pump speed entered through the HMI, with a maximum error of 20 rpm. The pressure control mode is a closed-loop system in which the operator selects a target Mean Arterial Pressure (MAP) to control PP. The inputs of the pressure control mode are the target MAP, received through the HMI, and the real MAP, received from the pressure transducer. A PID algorithm is applied to maintain the real MAP at the target value with a maximum error of 1 mmHg. The precision and control speed of the RPM-flow control mode were examined by comparing the PLC-based system to an experienced operator (EO) across seven RPM adjustment ranges (500, 1000, 2000 and random RPM changes; 8 trials per range) tested in a random order. The system’s PID algorithm performance in pressure control was assessed during 10 EVHP experiments using porcine hearts. 
Precision was examined by monitoring the steady-state pressure error throughout the perfusion period, and stabilizing speed was tested by performing two MAP adjustment changes (4 trials per change) of 15 and 20 mmHg. A total of 56 trials were performed to validate the RPM-flow control mode. Overall, the PLC-based system demonstrated significantly faster speeds than the EO in all trials (PLC 1.21±0.03, EO 3.69±0.23 seconds; p < 0.001) and greater precision in reaching the desired RPM (PLC 10±0.7, EO 33±2.7 mean RPM error; p < 0.001). Regarding pressure control, the PLC-based system had a median precision of ±1 mmHg error, and the median stabilizing times for MAP changes of 15 and 20 mmHg were 15 and 19.5 seconds, respectively. The novel PLC-based control system was 3 times faster with 60% less error than the EO for RPM-flow control. In pressure control mode, it demonstrated high precision and a fast stabilizing speed. In summary, this novel system successfully controlled perfusion flow and pressure with high precision, stability and a fast response time through a user-friendly interface. This design may provide a viable technique for future development of novel heart preservation and assessment strategies during EVHP.
Keywords: automatic control system, biomedical engineering, ex-vivo heart perfusion, human-machine interface, programmable logic controller
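The closed-loop pressure mode described above is a textbook PID application: the controller drives the measured MAP toward the operator's target. A minimal sketch of the idea, with hypothetical gains and a toy first-order plant standing in for the pump-heart circuit (the study's actual PLC tuning and perfusion dynamics are not reported):

```python
# Sketch of a PID loop regulating Mean Arterial Pressure (MAP).
# Gains, time constant and the first-order "plant" are hypothetical,
# not the study's actual PLC tuning or perfusion-circuit dynamics.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate(target_map=65.0, start_map=40.0, seconds=30.0, dt=0.1):
    """Toy plant: MAP relaxes toward the controller output with time constant tau."""
    pid = PID(kp=2.0, ki=0.8, kd=0.05, dt=dt)
    map_mmhg, tau = start_map, 2.0
    for _ in range(int(seconds / dt)):
        u = pid.step(target_map, map_mmhg)        # commanded pressure
        map_mmhg += dt * (u - map_mmhg) / tau     # first-order plant response
    return map_mmhg

final_map = simulate()
```

The integral term is what lets the loop hold the target with zero steady-state error, matching the abstract's ±1 mmHg figure in spirit only; real tuning would also need anti-windup and output limits for the pump.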
293 Approaches to Inducing Obsessional Stress in Obsessive-Compulsive Disorder (OCD): An Empirical Study with Patients Undergoing Transcranial Magnetic Stimulation (TMS) Therapy
Authors: Lucia Liu, Matthew Koziol
Abstract:
Obsessive-compulsive disorder (OCD), a long-lasting anxiety disorder involving recurrent, intrusive thoughts, affects over 2 million adults in the United States. Transcranial magnetic stimulation (TMS) stands out as a noninvasive, cutting-edge therapy that has been shown to reduce symptoms in patients with treatment-resistant OCD. The Food and Drug Administration (FDA)-approved protocol pairs TMS sessions with individualized symptom provocation, aiming to improve the susceptibility of brain circuits to stimulation. However, little standardization or guidance exists on how to conduct symptom provocation and which methods are most effective. This study aims to compare the effect of internal versus external techniques for inducing obsessional stress in a clinical setting during TMS therapy. Two symptom provocation methods, (i) asking patients thought-provoking questions about their obsessions (internal) and (ii) requesting patients to perform obsession-related tasks (external), were employed in a crossover design with repeated measurements. Thirty-six treatments of NeuroStar TMS were administered to each of two patients over 8 weeks in an outpatient clinic. Patient One received 18 sessions of internal provocation followed by 18 sessions of external provocation, while Patient Two received 18 sessions of external provocation followed by 18 sessions of internal provocation. The primary outcome was the level of self-reported obsessional stress on a visual analog scale from 1 to 10. The secondary outcome was self-reported OCD severity, collected biweekly on a four-level Likert scale (1 to 4: bad, fair, good, excellent). Outcomes were compared and tested between provocation arms through repeated measures ANOVA, accounting for intra-patient correlations. Ages were 42 for Patient One (male, White) and 57 for Patient Two (male, White). Both patients had similar moderate symptoms at baseline, as determined through the Yale-Brown Obsessive Compulsive Scale (YBOCS). 
When comparing obsessional stress induced across the two arms, the mean (SD) was 6.03 (1.18) for internal and 4.01 (1.28) for external strategies (P=0.0019); ranges were 3 to 8 for internal and 2 to 8 for external strategies. Internal provocation yielded 5 (31.25%) bad, 6 (37.5%) fair, 3 (18.75%) good, and 2 (12.5%) excellent responses for OCD status, while external provocation yielded 5 (31.25%) bad, 9 (56.25%) fair, 1 (6.25%) good, and 1 (6.25%) excellent responses (P=0.58). Internal symptom provocation tactics had a significantly stronger impact on inducing obsessional stress and led to numerically better OCD status, although the latter difference was not statistically significant. This could be attributed to the fact that answering questions may prompt patients to reflect more on their lived experiences and struggles with OCD. In the future, clinical trials with larger sample sizes are warranted to validate this finding. The results support the increased integration of internal methods into structured provocation protocols, potentially reducing the time required for provocation and achieving a greater treatment response to TMS.
Keywords: obsessive-compulsive disorder, transcranial magnetic stimulation, mental health, symptom provocation
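The arm comparison above uses repeated-measures ANOVA; as an illustrative alternative when distributional assumptions are in doubt, a session-level permutation test on the difference in mean VAS scores can be sketched as follows. The scores below are synthetic, shaped only loosely like the reported means and SDs; they are not the study's raw data, and this is a swapped-in technique, not the analysis the authors performed:

```python
import random

def perm_test_mean_diff(a, b, n_perm=5000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = (sum(pooled[:len(a)]) / len(a)
                - sum(pooled[len(a):]) / len(b))
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

# Synthetic session-level VAS scores (internal mean ~6, external mean ~4);
# NOT the study's data, which are not published in the abstract.
internal = [6, 7, 5, 8, 6, 4, 7, 6, 5, 6, 7, 8, 5, 6, 6, 7, 4, 5]
external = [4, 3, 5, 4, 2, 5, 6, 3, 4, 4, 5, 3, 2, 4, 5, 4, 6, 3]
obs, p = perm_test_mean_diff(internal, external)
```

Note that a permutation test ignores the crossover structure and intra-patient correlation that the ANOVA accounts for; it is shown only to make the "how different are the arm means" question concrete.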
292 Auditory Rehabilitation via a VR Serious Game for Children with Cochlear Implants: Bio-Behavioral Outcomes
Authors: Areti Okalidou, Paul D. Hatzigiannakoglou, Aikaterini Vatou, George Kyriafinis
Abstract:
Young children are nowadays adept at using technology. Hence, computer-based auditory training programs (CBATPs) have become increasingly popular in aural rehabilitation for children with hearing loss and/or with cochlear implants (CI). Yet, their clinical utility for prognostic, diagnostic, and monitoring purposes has not been explored. The purposes of the study were: a) to develop an updated version of the auditory rehabilitation tool for Greek-speaking children with cochlear implants, b) to develop a database for behavioral responses, and c) to compare accuracy rates and reaction times in children differing in hearing status and other medical and demographic characteristics, in order to assess the tool’s clinical utility in prognosis, diagnosis, and progress monitoring. The updated version of the auditory rehabilitation tool was developed on a tablet, retaining the User-Centered Design approach and the elements of the Virtual Reality (VR) serious game. The visual stimuli were farm animals acting in simple game scenarios designed to trigger children’s responses to animal sounds, names, and relevant sentences. Based on an extended version of Erber’s auditory development model, the VR game consisted of six stages, i.e., sound detection, sound discrimination, word discrimination, identification, comprehension of words in a carrier phrase, and comprehension of sentences. A familiarization stage (learning) was set prior to the game. Children’s tactile responses were recorded as correct, false, or impulsive, following a child-dependent set-up of a valid delay time after stimulus offset for valid responses. Reaction times were also recorded, and the database was in Excel format. The tablet version of the auditory rehabilitation tool was piloted in 22 preschool children with Normal Hearing (NH), which led to improvements. The study took place in clinical settings or at children’s homes. 
Fifteen children with CI, aged 5;7-12;3 years and 0;11-5;1 years post-implantation, used the auditory rehabilitation tool. Eight children with CI were monolingual, two were bilingual and five had additional disabilities. The control group consisted of 13 children with NH, aged 2;6-9;11 years. A comparison of both accuracy rates, as percent correct, and reaction times (in sec) was made at each stage, across hearing status and age, and also, within the CI group, based on the presence of additional disability and bilingualism. Both monolingual Greek-speaking children with CI with no additional disabilities and hearing peers showed high accuracy rates at all stages, with performances falling above the 3rd quartile. However, children with normal hearing scored higher than the children with CI, especially in the detection and word discrimination tasks. The reaction time differences between the two groups decreased in language-based tasks. Results for children with CI with additional disability or bilingualism varied. Finally, older children scored higher than younger ones in both groups (CI, NH), but larger differences occurred in children with CI. The interactions between familiarization with the software, age, hearing status and demographic characteristics are discussed. Overall, the VR game is a promising tool for tracking the development of auditory skills, as it provides multi-level longitudinal empirical data. Acknowledgment: This work is part of a project that has received funding from the Research Committee of the University of Macedonia under the Basic Research 2020-21 funding programme.
Keywords: VR serious games, auditory rehabilitation, auditory training, children with cochlear implants
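The tool logs each tactile response as correct, false, or impulsive together with a reaction time, from which per-stage accuracy and mean reaction times are derived. A hypothetical sketch of that bookkeeping (the record layout and stage names are assumptions, not the tool's actual Excel schema):

```python
from collections import defaultdict

# Each trial record: (stage, outcome, reaction_time_sec); outcome is one of
# "correct", "false", "impulsive" as in the tool's response coding.
# These example rows are invented, not data from the study.
trials = [
    ("detection", "correct", 0.9),
    ("detection", "false", 1.4),
    ("detection", "correct", 1.1),
    ("word_discrimination", "correct", 1.8),
    ("word_discrimination", "impulsive", 0.3),
]

def per_stage_metrics(trials):
    """Accuracy (% correct) and mean RT of correct responses, per stage."""
    by_stage = defaultdict(list)
    for stage, outcome, rt in trials:
        by_stage[stage].append((outcome, rt))
    metrics = {}
    for stage, rows in by_stage.items():
        correct_rts = [rt for outcome, rt in rows if outcome == "correct"]
        metrics[stage] = {
            "accuracy_pct": 100.0 * len(correct_rts) / len(rows),
            "mean_rt_correct": (sum(correct_rts) / len(correct_rts)
                                if correct_rts else None),
        }
    return metrics

m = per_stage_metrics(trials)
```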
291 Estimating Multidimensional Water Poverty Index in India: The Alkire Foster Approach
Authors: Rida Wanbha Nongbri, Sabuj Kumar Mandal
Abstract:
The Sustainable Development Goals (SDGs) for 2016-2030 were adopted in succession to the Millennium Development Goals (MDGs), which included a focus on access to sustainable water and sanitation. For over a decade, water has been a significant subject explored in various facets of life. Our day-to-day life is significantly impacted by water poverty at the socio-economic level. Reducing water poverty is an important policy challenge, particularly in emerging economies like India, owing to population growth and huge variation in topography and climatic factors. To design appropriate water policies and evaluate their effectiveness, a proper measurement of water poverty is essential. Against this backdrop, this study uses the Alkire-Foster (AF) methodology to estimate a multidimensional water poverty index for India at the household level. The methodology captures several attributes to understand the complex issues related to households’ water deprivation. The study employs two rounds of Indian Human Development Survey data (IHDS 2005 and 2012), focusing on four dimensions of water poverty, namely water access, water quantity, water quality, and water capacity, and seven indicators capturing these dimensions. In order to quantify water deprivation at the household level, the AF dual cut-off counting method is applied and the Multidimensional Water Poverty Index (MWPI) is calculated as the product of the Headcount Ratio (incidence) and the average share of weighted deprivations (intensity). The results identify deprivation across all dimensions at the country level and show that a large proportion of households in India are deprived of quality water and lack adequate water access in both the 2005 and 2012 survey rounds. The comparison between rural and urban households shows that a higher proportion of rural households is multidimensionally water poor compared to their urban counterparts. 
Among the four dimensions of water poverty, water quality is found to be the most significant one for both rural and urban households. In the 2005 round, almost 99.3% of households are water poor in at least one of the four dimensions, and among the water-poor households, the intensity of water poverty is 54.7%. These values do not change significantly in the 2012 round, but significant differences can be observed across the dimensions. States like Bihar, Tamil Nadu, and Andhra Pradesh rank the highest in terms of MWPI, whereas Sikkim, Arunachal Pradesh and Chandigarh rank the lowest in the 2005 round. Similarly, in the 2012 round, Bihar, Uttar Pradesh and Orissa rank the highest in terms of MWPI, whereas Goa, Nagaland and Arunachal Pradesh rank the lowest. The policy implications of this study can be multifaceted. Policy makers can focus either on water-poor households with lower intensity levels of water poverty, to minimize the total number of water-poor households, or on households with a high intensity of water poverty, to achieve an overall reduction in MWPI.
Keywords: Alkire-Foster (AF) methodology, deprivation, dual cut-off, multidimensional water poverty index (MWPI)
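The Alkire-Foster dual cut-off step described above, censoring non-poor households and then multiplying incidence H by intensity A, can be sketched in a few lines. The weights, the poverty cut-off k, and the deprivation matrix below are illustrative placeholders, not the study's IHDS indicator coding:

```python
# Alkire-Foster dual cut-off sketch: MWPI = H (headcount ratio) * A (intensity).
# Dimension names, equal weights and k = 0.5 are illustrative, not from IHDS.

WEIGHTS = {"access": 0.25, "quantity": 0.25, "quality": 0.25, "capacity": 0.25}
K = 0.5  # household is "water poor" if its weighted deprivation score >= K

households = [
    {"access": 1, "quantity": 0, "quality": 1, "capacity": 0},  # 1 = deprived
    {"access": 0, "quantity": 0, "quality": 1, "capacity": 0},
    {"access": 1, "quantity": 1, "quality": 1, "capacity": 1},
]

def mwpi(households, weights, k):
    # First cut-off: deprivation flags per indicator (already in the data).
    # Weighted deprivation score per household:
    scores = [sum(weights[d] * hh[d] for d in weights) for hh in households]
    # Second cut-off: censor households below k.
    poor_scores = [s for s in scores if s >= k]
    if not poor_scores:
        return 0.0, 0.0, 0.0
    h = len(poor_scores) / len(households)       # incidence (headcount ratio)
    a = sum(poor_scores) / len(poor_scores)      # intensity among the poor
    return h, a, h * a                           # MWPI = H * A

h, a, index = mwpi(households, WEIGHTS, K)
```

Here the first and third households are censored as water poor (scores 0.5 and 1.0), giving H = 2/3, A = 0.75 and MWPI = 0.5.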
290 The International Legal Protection of Foreign Investment Through Bilateral Investment Treaties and Double Taxation Treaties in the Context of International Investment Law and International Tax Law
Authors: Abdulmajeed Abdullah Alqarni
Abstract:
This paper is devoted to a study of the current frameworks applicable to foreign investments at the levels of domestic and international law, with a particular focus on the legitimate balance to be achieved between the rights of the host state and the legal protections owed to foreign investors. At the wider level of analysis, the paper attempts to map and critically examine the relationship between foreign investment and economic development. In doing so, the paper offers a study of how current discourses and practices in investment law can reconcile the competing interests of developing and developed countries. The study draws on the growing economic imperative for developing nations to create a favorable investment climate capable of attracting private foreign investment. It notes that over the past decades, an abundance of legal standards establishing substantive and procedural protections for legal forms of foreign investment in host countries has evolved and crystallized. The study then offers a substantive analysis of legal reforms at the domestic level in countries such as Saudi Arabia before providing an in-depth and substantive examination of the most important instruments developed at the level of international law: bilateral investment agreements and double taxation agreements. As to its methods, the study draws on case studies and on data assessing the link between double taxation and economic development. Drawing from the extant literature, doctrinal research, and international and comparative jurisprudence, the paper excavates and critically examines contemporary definitions and norms of international investment law, many of which have been given concrete form and specificity in an ever-expanding number of bilateral and multilateral investment treaties. 
By reconsidering the wider challenges of conflicts of law and jurisdiction, and the competing aims of the modern investment law regime, the study reflects on how bilateral investment treaties might succeed in achieving the dual aims of rights protection and economic sovereignty. Through its examination of the double taxation phenomenon, the study goes on to identify key practical challenges raised by the implementation of bilateral treaties, whilst also assessing the sufficiency of the domestic and international legal solutions proposed in response. In its final analysis, the study aims to contribute to existing scholarship by assessing contemporary legal and economic barriers to the free flow of investment with due regard for the legitimate concerns and diversity of developing nations. It does so by situating its analysis of the domestic enforcement of international investment instruments in its wider historical and normative context. By focusing on the economic and legal dimensions of foreign investment, the paper also aims to offer an interdisciplinary and holistic perspective on contemporary issues and developments in investment law, while offering practical reform proposals that can be used to achieve a more equitable balance between the rights and interests of states and private entities in an increasingly transnationalized sphere of investment regulation and treaty arbitration.
Keywords: foreign investment, bilateral investment treaties, international tax law, double taxation treaties
289 Climate Change and Rural-Urban Migration in Brazilian Semiarid Region
Authors: Linda Márcia Mendes Delazeri, Dênis Antônio Da Cunha
Abstract:
Over the past few years, the evidence that human activities have altered the concentration of greenhouse gases in the atmosphere has become stronger, indicating that this accumulation is the most likely cause of the climate change observed so far. The risks associated with climate change, although uncertain, have the potential to increase social vulnerability, exacerbating existing socioeconomic challenges. Developing countries are potentially the most affected by climate change, since they have less capacity to adapt and are the most dependent on agricultural activities, one of the sectors in which the major negative impacts are expected. In Brazil, specifically, the localities that form the semiarid region are expected to be among the most affected, due to the existing irregularity in rainfall and high temperatures, in addition to economic and social factors endemic to the region. Given the strategic limitations to handling the environmental shocks caused by climate change, an alternative adopted in response to these shocks is migration. Understanding the specific features of migration flows, such as duration, destination and composition, is essential to understand the impacts of migration on origin and destination locations and to develop appropriate policies. Thus, this study aims to examine whether climatic factors have contributed to rural-urban migration in semiarid municipalities in the recent past and how these migration flows will be affected by future scenarios of climate change. The study was based on the microeconomic theory of utility maximization, in which the individual decides to leave the countryside and move to the urban area in order to maximize their utility. Analytically, we estimated a fixed-effects econometric model, and the results confirmed the expectation that climate drivers are crucial for the occurrence of rural-urban migration. 
Other drivers of the migration process, such as economic, social and demographic factors, were also important. Additionally, predictions about rural-urban migration motivated by variations in temperature and precipitation under the climate change scenarios RCP 4.5 and 8.5 were made for the periods 2016-2035 and 2046-2065, as defined by the Intergovernmental Panel on Climate Change (IPCC). The results indicate that rural-urban migration in the semiarid region will increase in both scenarios and in both periods. In general, the results of this study reinforce the need to formulate public policies that avert migration for climatic reasons, such as policies that support income-generating productive activities in rural areas. By providing greater incentives for family agriculture and expanding sources of credit, farmers will be better positioned to face climatic adversities and to settle in rural areas. Ultimately, if migration becomes necessary, policies should be adopted that seek an organized and planned development of urban areas, considering migration as an adaptation strategy to adverse climate effects. Thus, policies that act to absorb migrants in urban areas and ensure that they have access to the basic services offered to the urban population would contribute to reducing the social costs of climate variability.
Keywords: climate change, migration, rural productivity, semiarid region
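The fixed-effects estimation referred to above is typically computed via the within transformation: demean every variable by municipality, then apply OLS to the demeaned data, which sweeps out time-invariant municipal characteristics. A toy sketch on synthetic panel data (the coefficient value, variable names and panel dimensions are invented for illustration, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic panel: 50 municipalities x 10 years with municipality fixed effects.
n_units, n_periods = 50, 10
unit = np.repeat(np.arange(n_units), n_periods)
temp = rng.normal(25, 2, n_units * n_periods)      # e.g. mean temperature
alpha = rng.normal(0, 5, n_units)[unit]            # unobserved fixed effects
beta_true = 0.8                                    # invented "climate" effect
migration = alpha + beta_true * temp + rng.normal(0, 0.5, n_units * n_periods)

def within_ols(y, x, unit):
    """Fixed-effects (within) estimator for a single regressor."""
    counts = np.bincount(unit)
    y_dm = y - np.bincount(unit, y)[unit] / counts[unit]   # demean by unit
    x_dm = x - np.bincount(unit, x)[unit] / counts[unit]
    return (x_dm @ y_dm) / (x_dm @ x_dm)                   # OLS slope

beta_hat = within_ols(migration, temp, unit)
```

Because the fixed effects alpha are constant within each municipality, demeaning removes them exactly and beta_hat recovers the true slope up to sampling noise.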
288 Review of Concepts and Tools Applied to Assess Risks Associated with Food Imports
Authors: A. Falenski, A. Kaesbohrer, M. Filter
Abstract:
Introduction: Risk assessments can be performed in various ways and at different degrees of complexity. In order to assess risks associated with imported foods, additional information needs to be taken into account compared to a risk assessment of regional products. The present review gives an overview of currently available best-practice approaches and data sources used for food import risk assessments (IRAs). Methods: A literature review was performed. PubMed was searched for articles about food IRAs published in the years 2004 to 2014 (English and German texts only, search string “(English [la] OR German [la]) (2004:2014 [dp]) import [ti] risk”). Titles and abstracts were screened for import risks in the context of IRAs. The finally selected publications were analysed according to a predefined questionnaire extracting the following information: risk assessment guidelines followed, modelling methods used, data and software applied, and the existence of an analysis of uncertainty and variability. IRAs cited in these publications were also included in the analysis. Results: The PubMed search resulted in 49 publications, 17 of which contained information about import risks and risk assessments. Within these, 19 cross-references of interest for the present study were identified. These included original articles, reviews and guidelines. At least one of the guidelines of the World Organisation for Animal Health (OIE) or the Codex Alimentarius Commission was referenced in each of the IRAs, for imports of animals or imports concerning foods, respectively. Interestingly, a combination of both was also used to assess the risk associated with the import of live animals serving as a source of food. Methods ranged from fully quantitative IRAs using probabilistic models and dose-response models to qualitative IRAs in which decision trees or severity tables were set up using parameter estimations based on expert opinions. Calculations were done using @Risk, R or Excel. 
Most heterogeneous was the type of data used, ranging from general information on imported goods (food, live animals) to pathogen prevalence in the country of origin. These data were either publicly available in databases or lists (e.g., OIE WAHID and Handystatus II, FAOSTAT, Eurostat, TRACES), accessible on a national level (e.g., herd information) or open only to a small group of people (flight passenger import data at a national airport customs office). In the IRAs, an uncertainty analysis was mentioned in some cases, but corresponding calculations were performed in only a few. Conclusion: The current state of the art in the assessment of risks of imported foods is characterized by great heterogeneity in general methodology and data used. Often information is gathered on a case-by-case basis and reformatted by hand in order to perform the IRA. This analysis therefore illustrates the need for a flexible, modular framework supporting the connection of existing data sources with data analysis and modelling tools. Such an infrastructure could pave the way to IRA workflows applicable ad hoc, e.g., in a crisis situation.
Keywords: import risk assessment, review, tools, food import
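A fully quantitative IRA of the kind surveyed here typically propagates probabilistic inputs (e.g., a Beta-distributed pathogen prevalence in the country of origin) through a dose-response model by Monte Carlo simulation, the workflow that tools like @Risk or R automate. A deliberately simplified sketch with invented parameter values (not taken from any cited assessment):

```python
import math
import random

random.seed(1)

# All parameter values below are invented for illustration only.
N_SIM = 2000
PREV_ALPHA, PREV_BETA = 2, 98   # Beta-distributed lot prevalence (~2% mean)
SERVINGS_PER_LOT = 200
R = 0.002                       # exponential dose-response parameter
MEAN_DOSE = 50.0                # mean dose (CFU) in a contaminated serving

def expected_cases_per_lot():
    """One Monte Carlo iteration: sample prevalence, contamination, doses."""
    prevalence = random.betavariate(PREV_ALPHA, PREV_BETA)
    contaminated = sum(random.random() < prevalence
                       for _ in range(SERVINGS_PER_LOT))
    # Exponential dose-response: P(illness | dose d) = 1 - exp(-R * d)
    total = 0.0
    for _ in range(contaminated):
        dose = random.expovariate(1.0 / MEAN_DOSE)
        total += 1.0 - math.exp(-R * dose)
    return total

cases = [expected_cases_per_lot() for _ in range(N_SIM)]
mean_cases = sum(cases) / N_SIM   # Monte Carlo mean illnesses per imported lot
```

The spread of `cases` across iterations is what an uncertainty analysis would summarize (e.g., as a 95% interval), the step the review found was often mentioned but rarely calculated.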
287 Online Monitoring and Control of Continuous Mechanosynthesis by UV-Vis Spectrophotometry
Authors: Darren A. Whitaker, Dan Palmer, Jens Wesholowski, James Flaherty, John Mack, Ahmad B. Albadarin, Gavin Walker
Abstract:
Traditional mechanosynthesis has been performed by either ball milling or manual grinding. However, neither of these techniques allows the easy application of process control. The temperature may change unpredictably due to friction in the process, hence the amount of energy transferred to the reactants is intrinsically non-uniform. Recently, it has been shown that the use of twin-screw extrusion (TSE) can overcome these limitations. Additionally, TSE enables a platform for continuous synthesis or manufacturing, as it is an open-ended process, with feedstocks at one end and product at the other. Several materials, including metal-organic frameworks (MOFs), co-crystals and small organic molecules, have been produced mechanochemically using TSE. The described advantages of TSE are offset by drawbacks such as increased process complexity (a large number of process parameters) and variation in feedstock flow impacting product quality. To handle these drawbacks, this study utilizes UV-Vis spectrophotometry (InSpectroX, ColVisTec) as an online tool to gain real-time information about the quality of the product. Additionally, this is combined with real-time process information in an Advanced Process Control system (PharmaMV, Perceptive Engineering), allowing full supervision and control of the TSE process. Further, by characterizing the dynamic behavior of the TSE, a model predictive controller (MPC) can be employed to ensure the process remains under control when perturbed by external disturbances. Two reactions were studied: a Knoevenagel condensation of barbituric acid and vanillin, and the direct amidation of hydroquinone by ammonium acetate to form N-Acetyl-para-aminophenol (APAP), commonly known as paracetamol. Both reactions could be carried out continuously using TSE; nuclear magnetic resonance (NMR) spectroscopy was used to confirm the percentage conversion of starting materials to product. 
This information was used to construct partial least squares (PLS) calibration models within the PharmaMV development system, which relate the percentage conversion to product to the acquired UV-Vis spectrum. Once this was complete, the model was deployed within the PharmaMV Real-Time System to carry out automated optimization experiments to maximize the percentage conversion based on a set of process parameters, in a design of experiments (DoE)-style methodology. With the optimum set of process parameters established, a series of PRBS process response tests (i.e., pseudo-random binary sequences) around the optimum was conducted. The resultant dataset was used to build a statistical model and an associated MPC. The controller maximizes product quality whilst ensuring the process remains at the optimum even as disturbances, such as raw material variability, are introduced into the system. To summarize, a combination of online spectral monitoring and advanced process control was used to develop a robust system for the optimization and control of two TSE-based mechanosynthetic processes.
Keywords: continuous synthesis, pharmaceutical, spectroscopy, advanced process control
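The PLS calibration step, regressing percent conversion on the acquired spectrum, can be sketched with the classic NIPALS PLS1 recursion. The "spectra" below are synthetic stand-ins, since neither the raw UV-Vis data nor PharmaMV's internal implementation are given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 40 samples x 60 wavelengths, with conversion driven by
# two latent absorbance bands. Stand-ins for real UV-Vis calibration data.
n, p, n_comp = 40, 60, 2
latent = rng.normal(size=(n, 2))
loadings = rng.normal(size=(2, p))
X = latent @ loadings + 0.01 * rng.normal(size=(n, p))
y = 50 + 10 * latent[:, 0] - 5 * latent[:, 1]      # percent conversion

def pls1_fit(X, y, n_comp):
    """NIPALS PLS1: returns regression vector B and intercept b0."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)          # weight vector
        t = Xc @ w                      # scores
        tt = t @ t
        p_load = Xc.T @ t / tt          # X loadings
        q_coef = yc @ t / tt            # y loading
        Xc = Xc - np.outer(t, p_load)   # deflate X
        yc = yc - q_coef * t            # deflate y
        W.append(w); P.append(p_load); q.append(q_coef)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, y_mean - x_mean @ B

B, b0 = pls1_fit(X, y, n_comp)
y_hat = X @ B + b0
```

With two latent components generating both X and y, the two-component model recovers conversion almost exactly; in practice the number of components would be chosen by cross-validation.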
286 Glucose Measurement in Response to Environmental and Physiological Challenges: Towards a Non-Invasive Approach to Study Stress in Fishes
Authors: Tomas Makaras, Julija Razumienė, Vidutė Gurevičienė, Gintarė Sauliutė, Milda Stankevičiūtė
Abstract:
Stress responses represent an animal’s natural reactions to various challenging conditions and can be used as a welfare indicator. Despite the wide use of glucose measurements in stress evaluation, there are some inconsistencies in its acceptance as a stress marker, especially in comparison with non-invasive cortisol measurements in stress-challenged fish. To meet this challenge and to test the reliability and applicability of glucose measurement in practice, different environmental/anthropogenic exposure scenarios were simulated in this study to provoke chemical-induced stress in fish (14-day exposure to landfill leachate), followed by a 14-day stress recovery period; to represent a possible infection under the cumulative effect of leachate, fish were subsequently exposed to the pathogenic oomycete Saprolegnia parasitica. This pathogen is endemic to all freshwater habitats worldwide and is partly responsible for the decline of natural freshwater fish populations. Brown trout (Salmo trutta fario) and sea trout (Salmo trutta trutta) juveniles were chosen because a large body of literature exists on physiological stress responses in these species. Glucose content was analysed by applying invasive and non-invasive glucose measurement procedures to different test media: fish blood, gill tissue and fish-holding water. The results indicated that the quantity of glucose released into the holding water of stressed fish increased considerably (approx. 3.5- to 8-fold) and remained substantially higher (approx. 2- to 4-fold) than the control level throughout the stress recovery period, suggesting that fish did not recover from chemical-induced stress. The circulating levels of glucose in blood and gills decreased over time in fish exposed to the different stressors. However, the decrease in gill glucose was similar to the control levels measured at the same time points and was found to be insignificant. 
The data analysis showed that concentrations of β-D-glucose measured in the gills of fish treated with S. parasitica differed significantly from the control recovery group but not from the leachate recovery group, showing that the presence of S. parasitica in water had no additive effect. In addition, a positive correlation between blood and gill glucose was determined. Parallel trends in blood and water glucose changes suggest that water glucose measurement has considerable potential for predicting stress. This study demonstrated that measuring β-D-glucose in fish-holding water is not stressful, as it involves no handling and manipulation of the organism, and has critical technical advantages over current (invasive) methods, which mainly use blood samples or specific tissues. The quantification of glucose could be essential for stress-physiology and aquaculture studies interested in the assessment or long-term monitoring of fish health.
Keywords: brown trout, landfill leachate, sea trout, pathogenic oomycetes, β-D-glucose
Procedia PDF Downloads 174
285 Anticancer Potentials of Aqueous Tinospora cordifolia and Its Bioactive Polysaccharide, Arabinogalactan on Benzo(a)Pyrene Induced Pulmonary Tumorigenesis: A Study with Relevance to Blood Based Biomarkers
Authors: Vandana Mohan, Ashwani Koul
Abstract:
Aim: To evaluate the potential of aqueous Tinospora cordifolia stem extract (Aq.Tc) and Arabinogalactan (AG) against pulmonary carcinogenesis and associated tumor markers. Background: Lung cancer is one of the most frequent malignancies, with a high mortality rate due to limitations in early detection resulting in low cure rates. Current research focuses on identifying blood-based biomarkers such as CEA, ctDNA and LDH, which may have the potential to detect cancer at an early stage and to evaluate therapeutic response and recurrence. Medicinal plants and their active components have been widely investigated for their anticancer potential. The aqueous preparation of T. cordifolia extract is enriched in the polysaccharide fraction, i.e., AG, compared with other types of extract. Moreover, the polysaccharide fraction of T. cordifolia has been reported to show profound anti-metastatic activity against in vitro lung cancer cell lines. However, its effect in in vivo lung cancer models and the underlying mechanisms involved have not been explored in depth. Experimental Design: Mice were randomly segregated into six groups. Group I animals served as controls. Group II animals were administered Aq.Tc extract (200 mg/kg b.w.) p.o. on alternate days. Group III animals were fed AG (7.5 mg/kg b.w.) p.o. on alternate days (thrice a week). Group IV animals were administered Benzo(a)pyrene [B(a)P] (50 mg/kg b.w.) i.p. twice within an interval of two weeks. Group V animals received Aq.Tc extract as in group II; in addition, B(a)P was administered after two weeks of Aq.Tc administration following the same protocol as for group IV. Group VI animals received AG as in group III; in addition, B(a)P was administered after two weeks of AG administration.
Results: Administration of B(a)P to mice resulted in increased tumor incidence, multiplicity and pulmonary somatic index, with a concomitant increase in serum/plasma markers such as CEA, ctDNA, LDH and TNF-α. Aq.Tc and AG supplementation significantly attenuated these alterations at different stages of tumorigenesis, thereby showing a potent anti-cancer effect in lung cancer. A more pronounced decrease in serum/plasma markers was observed in animals treated with Aq.Tc than in those fed AG. Extensive hyperproliferation of the alveolar epithelium was also prominent in B(a)P-induced lung tumors; however, treatment of lung-tumor-bearing mice with Aq.Tc and AG reduced alveolar damage, evident from a decreased number of hyperchromatic irregular nuclei. A direct correlation between the concentration of tumor markers and the intensity of lung cancer was observed in tumor-bearing animals co-treated with Aq.Tc and AG. Conclusion: These findings substantiate the chemopreventive potential of Aq.Tc and AG against lung tumorigenesis. Interestingly, Aq.Tc was found to be more effective in modulating the cancer, which may be attributed to the synergism offered by the various components of Aq.Tc. Further studies are in progress to understand the underlying mechanisms by which Aq.Tc and AG inhibit lung tumorigenesis.
Keywords: Arabinogalactan, Benzo(a)pyrene B(a)P, carcinoembryonic antigen (CEA), circulating tumor DNA (ctDNA), lactate dehydrogenase (LDH), Tinospora cordifolia
Procedia PDF Downloads 185
284 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8–30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are then used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on representing EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of each sub-band as scores and organizes them into a single vector, which is used as the training vector of a global SVM classifier.
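The FFT-based frequency decomposition described above can be sketched as follows; this is a minimal illustration assuming non-overlapping, equally spaced sub-bands over 0–40 Hz (the function name, the sampling rate and the band layout are assumptions, since the abstract does not specify them):

```python
import numpy as np

def fft_subbands(eeg, fs, n_bands=33, f_max=40.0):
    """Decompose EEG trials into per-sub-band FFT coefficient matrices.

    eeg: array of shape (n_trials, n_channels, n_samples)
    fs:  sampling rate in Hz
    Returns a list of n_bands complex arrays, one per sub-band,
    each keeping only that band's frequency bins.
    """
    n_samples = eeg.shape[-1]
    spectrum = np.fft.rfft(eeg, axis=-1)            # (trials, channels, n_freqs)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)  # bin frequencies in Hz
    edges = np.linspace(0.0, f_max, n_bands + 1)    # sub-band boundaries
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)         # bins inside this band
        bands.append(spectrum[..., mask])
    return bands
```

Note that discarding the bins above 40 Hz keeps roughly 40/Nyquist of the rfft coefficients; at an assumed 250 Hz sampling rate that is about 32% of the bins, consistent with the 68% dimension reduction the abstract reports.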
Initially, the public EEG dataset IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (its dimension is 68% smaller than the original signal's), the resulting FFT matrix retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall classification rate compared to the commonly used filtering, going from 73.7% with IIR to 84.2% with FFT. The accuracy improvement above 10% and the reduction in computational cost denote the potential of the FFT for EEG signal filtering in the context of MI-based BCIs implementing SBCSP. Tests with other datasets are currently being performed to reinforce these conclusions.
Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
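The per-sub-band scoring and stacking step can also be sketched in simplified form; here a plain class-mean projection stands in for the per-band CSP+LDA pipeline and the Bayesian scoring described in the abstract, so this illustrates the stacking idea rather than the authors' exact method:

```python
import numpy as np

def stack_subband_scores(band_features, labels):
    """Stack one discriminative score per sub-band into a trial-wise vector.

    band_features: list of (n_trials, n_features) arrays, one per sub-band.
    labels: binary class labels (0/1), one per trial.
    Returns a (n_trials, n_bands) score matrix that would then feed
    a global classifier (an SVM in the paper's pipeline).
    """
    labels = np.asarray(labels)
    scores = []
    for X in band_features:
        mu0 = X[labels == 0].mean(axis=0)
        mu1 = X[labels == 1].mean(axis=0)
        w = mu1 - mu0                       # class-mean difference direction
        midpoint = (mu0 + mu1) / 2.0
        scores.append((X - midpoint) @ w)   # signed score per trial
    return np.column_stack(scores)
```

The design point is that each sub-band is reduced to a single score, so the global classifier sees a compact n_bands-dimensional vector per trial regardless of how many features each sub-band produced.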
Procedia PDF Downloads 129