Search results for: fundamental frequencies
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2603

473 Detection, Analysis and Determination of the Origin of Copy Number Variants (CNVs) in Intellectual Disability/Developmental Delay (ID/DD) Patients and Autistic Spectrum Disorders (ASD) Patients by Molecular and Cytogenetic Methods

Authors: Pavlina Capkova, Josef Srovnal, Vera Becvarova, Marie Trkova, Zuzana Capkova, Andrea Stefekova, Vaclava Curtisova, Alena Santava, Sarka Vejvalkova, Katerina Adamova, Radek Vodicka

Abstract:

ASDs are heterogeneous, complex developmental disorders with a significant genetic background. Recurrent CNVs are known to be a frequent cause of ASD. These CNVs can, however, have variable expressivity, resulting in a spectrum of phenotypes from asymptomatic to ID/DD/ASD. ASD is associated with ID in ~75% of individuals. Various platforms are used to detect pathogenic mutations in the genomes of these patients. This study focuses on determining the frequency of pathogenic mutations in a group of ASD patients and a group of ID/DD patients using various strategies, along with a comparison of their detection rates. The possible role of the origin of these mutations in the aetiology of ASD was assessed. The study included 35 individuals with ASD and 68 individuals with ID/DD (64 males and 39 females in total), who underwent rigorous genetic, neurological and psychological examinations. Screening for pathogenic mutations involved karyotyping, screening for FMR1 mutations and for metabolic disorders, a targeted MLPA test with probe mixes Telomeres 3 and 5, Microdeletion 1 and 2, Autism 1, MRX, and chromosomal microarray analysis (CMA) (Illumina or Affymetrix). Chromosomal aberrations were revealed by karyotyping in 7 individuals (1 in the ASD group). FMR1 mutations were discovered in 3 individuals (1 in the ASD group). The detection rate of pathogenic mutations in ASD patients with a normal karyotype was 15.15% by MLPA and CMA. The frequencies of pathogenic mutations in ID/DD patients with a normal karyotype were 25.0% by MLPA and 35.0% by CMA. CNVs inherited from asymptomatic parents were more abundant than de novo changes in ASD patients (11.43% vs. 5.71%), in contrast to the ID/DD group, where de novo mutations prevailed over inherited ones (26.47% vs. 16.18%). ASD patients more frequently shared their mutations with their fathers than did patients from the ID/DD group (8.57% vs. 1.47%).
Maternally inherited mutations predominated in the ID/DD group in comparison with the ASD group (14.7% vs. 2.86%). CNVs of unknown significance were found in 10 patients by CMA and in 3 patients by MLPA. Although the detection rate is highest when using CMA, recurrent CNVs can be easily detected by MLPA. CMA proved more efficient in the ID/DD group, where a larger spectrum of rare pathogenic CNVs was revealed. This study determined that maternally inherited, highly penetrant mutations and de novo mutations more often resulted in ID/DD without ASD. Paternally inherited mutations could, however, be a source of greater variability in the genomes of ASD patients and contribute to the polygenic character of the inheritance of ASD. As the number of subjects is limited, a larger cohort is needed to confirm this conclusion. Inherited CNVs have a role in the aetiology of ASD, possibly in combination with additional genetic factors - mutations elsewhere in the genome. The identification of these interactions constitutes a challenge for the future. Supported by MH CZ – DRO (FNOl, 00098892), IGA UP LF_2016_010, TACR TE02000058 and NPU LO1304.

Keywords: autistic spectrum disorders, copy number variant, chromosomal microarray, intellectual disability, karyotyping, MLPA, multiplex ligation-dependent probe amplification

Procedia PDF Downloads 349
472 Interaction Between Task Complexity and Collaborative Learning on Virtual Patient Design: The Effects on Students’ Performance, Cognitive Load, and Task Time

Authors: Fatemeh Jannesarvatan, Ghazaal Parastooei, Jimmy Frerejan, Saedeh Mokhtari, Peter Van Rosmalen

Abstract:

Medical and dental education increasingly emphasizes the acquisition, integration, and coordination of complex knowledge, skills, and attitudes that can be applied in practical situations. Instructional design approaches have focused on using real-life tasks to facilitate complex learning in both real and simulated environments. The four-component instructional design (4C/ID) model has become a useful guideline for designing instructional materials that improve learning transfer, especially in health professions education. The objective of this study was to apply the 4C/ID model to the creation of virtual patients (VPs) that dental students can use to practice their clinical management and clinical reasoning skills. The study first explored the context and concept of complicating factors and common errors for novices and how they can affect the design of a virtual patient program. The study then selected key dental information and considered the content needs of dental students. The design of the virtual patients was based on the 4C/ID model's fundamental principles: designing learning tasks that reflect real patient scenarios and applying different levels of task complexity to challenge students to apply their knowledge and skills in different contexts; creating varied learning materials that support students during the VP program and are closely integrated with the learning tasks and the students' curricula; providing cognitive feedback at different levels of the program; and providing procedural information so that students follow a step-by-step process from history taking to writing a comprehensive treatment plan. Four virtual patients were designed using the 4C/ID model's principles, and an experimental design was used to test the effectiveness of these principles in achieving the intended educational outcomes.
The 4C/ID model provides an effective framework for designing engaging and successful virtual patients that support the transfer of knowledge and skills for dental students. However, there are some challenges and pitfalls that instructional designers should take into account when developing these educational tools.

Keywords: 4C/ID model, virtual patients, education, dental, instructional design

Procedia PDF Downloads 80
471 Consideration for a Policy Change to the South African Collective Bargaining Process: A Reflection on National Union of Metalworkers of South Africa v Trenstar (Pty) (2023) 44 ILJ 1189 (CC)

Authors: Carlos Joel Tchawouo Mbiada

Abstract:

In the wake of the apartheid era, South Africa embarked on a democratisation of all its institutions, underpinned by a social justice perspective aimed at eradicating past injustices. These democratic values, based on fundamental human rights and equality, informed all rights enshrined in the Constitution of the Republic of South Africa, 1996. All rights are therefore infused with a social justice perspective, and labour rights are no exception. Labour relations are regulated to such an extent that labour law is viewed as too rigid; hence the call for more flexibility to enhance investment and boost job creation. This view, articulated by the Free Market Foundation, fell on deaf ears, as its opponents believe in what is termed regulated flexibility, which affords greater protection to vulnerable workers while promoting business opportunities and investment. The question this paper examines is how far the regulation of labour should go to protect employees. The question is prompted by the recent Constitutional Court judgment in National Union of Metalworkers of South Africa v Trenstar, which barred the employer from using replacement labour in response to a strike by its employees. Whether employers may use replacement labour and have recourse to lock-outs in response to strike action is considered in the context of the dichotomy between the free-market and social justice perspectives, which are at loggerheads in the South African collective bargaining process. With the unemployment rate soaring constantly, the aftermath of the COVID-19 pandemic, the effects of the war in Ukraine, and lately the financial burden load shedding places on companies running their businesses, this paper argues for a policy shift toward deregulation, or lesser state and judicial intervention. This initiative would relieve the burden on companies to run a viable business while at the same time protecting existing jobs.

Keywords: labour law, replacement labour, right to strike, free market foundation perspective, social justice perspective

Procedia PDF Downloads 103
470 Test Procedures for Assessing the Peel Strength and Cleavage Resistance of Adhesively Bonded Joints with Elastic Adhesives under Detrimental Service Conditions

Authors: Johannes Barlang

Abstract:

Adhesive bonding plays a pivotal role in various industrial applications, ranging from automotive manufacturing to aerospace engineering. The peel strength of adhesives, a critical parameter reflecting the ability of an adhesive to withstand external forces, is crucial for ensuring the integrity and durability of bonded joints. This study provides a synopsis of the methodologies, influencing factors, and significance of peel testing in the evaluation of adhesive performance. Peel testing involves the measurement of the force required to separate two bonded substrates under controlled conditions. This study systematically reviews the different testing techniques commonly applied in peel testing, including the widely used 180-degree peel test and the T-peel test. Emphasis is placed on the importance of selecting an appropriate testing method based on the specific characteristics of the adhesive and the application requirements. The influencing factors on peel strength are multifaceted, encompassing adhesive properties, substrate characteristics, environmental conditions, and test parameters. Through an in-depth analysis, this study explores how factors such as adhesive formulation, surface preparation, temperature, and peel rate can significantly impact the peel strength of adhesively bonded joints. Understanding these factors is essential for optimizing adhesive selection and application processes in real-world scenarios. Furthermore, the study highlights the role of peel testing in quality control and assurance, aiding manufacturers in maintaining consistent adhesive performance and ensuring the reliability of bonded structures. The correlation between peel strength and long-term durability is discussed, shedding light on the predictive capabilities of peel testing in assessing the service life of adhesive bonds. In conclusion, this study underscores the significance of peel testing as a fundamental tool for characterizing adhesive performance. 
By delving into testing methodologies, influencing factors, and practical implications, this study contributes to the broader understanding of adhesive behavior and fosters advancements in adhesive technology across diverse industrial sectors.
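As an illustration of how a peel-test trace is reduced to the single number discussed above, the sketch below averages the steady-state plateau of a hypothetical force-displacement record and normalizes by specimen width, the usual way peel strength is reported for 180-degree and T-peel tests; all values here are invented for illustration.

```python
import numpy as np

# Hypothetical plateau region of a peel force-displacement trace (N),
# sampled after the initial force peak has settled.
force_n = np.array([18.2, 19.1, 18.7, 18.9, 19.4, 18.5])
width_mm = 25.0  # specimen width (mm), an assumed test geometry

# Peel strength = mean plateau force per unit bond width (N/mm)
peel_strength = force_n.mean() / width_mm
```

Normalizing by width is what makes results from different specimen geometries comparable, which matters when factors such as peel rate or surface preparation are varied across test series.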

Keywords: adhesively bonded joints, cleavage resistance, elastic adhesives, peel strength

Procedia PDF Downloads 95
469 Developing A Third Degree Of Freedom For Opinion Dynamics Models Using Scales

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Opinion dynamics models use an agent-based modeling approach to model people’s opinions. A model's properties are usually explored by testing its two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their opinions. Here we show the existence of a third degree of freedom. It can be used to turn one model into another, or to change a model's output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. It is therefore fundamental to know how a real-world opinion (e.g., supporting a candidate) is turned into a number; specifically, we want to know whether, under a different opinion-to-number transformation, the model's dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In that field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted one into another, in the same way as we convert meters to feet. In our work, we therefore analyze how such a scale transformation may affect opinion dynamics models. We perform the analysis both with mathematical modeling and by validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change a model's dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way data are pre-processed. Indeed, we quantify that this effect may alter the model's output by 100%.
By using two models from the standard literature, we show that a scale transformation can turn one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we argue that scale transformation should be considered a third degree of freedom for opinion dynamics: its properties have a strong impact both on theoretical models and on their application to real-world data.
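The scale-transformation effect described above can be illustrated with a minimal bounded-confidence (Deffuant-Weisbuch) sketch. This is a standard model from the opinion dynamics literature, not necessarily one of the two models used in the study, and the squaring rescale is an arbitrary monotone transformation chosen for illustration: running the dynamics on rescaled opinions is not the same as rescaling the final opinions, so the choice of scale acts as a genuine degree of freedom.

```python
import numpy as np

def deffuant(opinions, eps=0.2, mu=0.5, steps=20000, seed=1):
    """Bounded-confidence dynamics: a random pair moves closer
    whenever their opinions differ by less than the threshold eps."""
    rng = np.random.default_rng(seed)
    x = np.asarray(opinions, dtype=float).copy()
    n = len(x)
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        if abs(x[i] - x[j]) < eps:
            shift = mu * (x[j] - x[i])
            x[i] += shift
            x[j] -= shift
    return x

rng = np.random.default_rng(0)
raw = rng.uniform(0.0, 1.0, 100)      # opinions measured on one scale

run_then_square = deffuant(raw) ** 2  # dynamics on the original scale
square_then_run = deffuant(raw**2)    # same data, measured on a squared scale

# The two orders of operations disagree agent by agent:
# the opinion-to-number scale is itself a modelling choice.
gap = np.max(np.abs(run_then_square - square_then_run))
```

With these assumed parameters the two runs end in visibly different opinion profiles, which is exactly the pre-processing sensitivity the abstract quantifies.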

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 155
468 PWM Harmonic Injection and Frequency-Modulated Triangular Carrier to Improve the Lives of the Transformers

Authors: Mario J. Meco-Gutierrez, Francisco Perez-Hidalgo, Juan R. Heredia-Larrubia, Antonio Ruiz-Gonzalez, Francisco Vargas-Merino

Abstract:

More and more applications connect power inverters to transformers, for example, in facilities that connect renewable generation to the power grid. It is well known that the output signal of a power inverter is not a pure sine wave. The harmonic content produces negative effects, one of which is the heating of electrical machines, which in turn shortens their life. The decrease in the life of a transformer can be calculated by the Arrhenius or Montsinger equation; analysing this expression, any long-term decrease in transformer temperature of 6-7 ºC doubles its life expectancy. Methodologies: This work presents a pulse-width modulation (PWM) technique with harmonic injection and a triangular carrier modulated in frequency. The technique is used to improve the quality of the output voltage signal of PWM-controlled power inverters. The proposed technique increases the fundamental term and significantly reduces the low-order harmonics, with the same number of commutations per period as sine-PWM control. To achieve this, the modulating wave is compared to a triangular carrier whose frequency varies over the period of the modulator; it is therefore advantageous for the modulating signal to have a large amount of sinusoidal 'information' in the areas of greater sampling. A triangular carrier whose frequency varies over the modulator's period yields more samples in the area with the greatest slope. A power inverter controlled by the proposed PWM technique is connected to a transformer. Results: In order to verify the derived thermal parameters under different operating conditions, a further ambient and loading scenario, sampled from the same power transformer, is used for verification. The temperatures of different parts of the transformer are reported for each PWM control technique analysed.
The temperature is assessed under each PWM control technique, and the life of the transformer is calculated for each. Conclusion: This paper analyses the transformer heating produced by the proposed technique and compares it with that of other forms of PWM control. It can be seen that the reduction in harmonic content produces less transformer heating and, therefore, an increase in the life of the transformer.
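The life-expectancy rule quoted above (life doubles for every ~6 ºC sustained drop in temperature) can be written as a one-line Montsinger-style rule of thumb; this is a sketch of the rule as stated in the abstract, not of the authors' thermal model.

```python
def relative_life(delta_t_c, doubling_step_c=6.0):
    """Life multiplier for a sustained temperature drop of delta_t_c (degC),
    under the doubling-per-~6-degC rule of thumb from the Montsinger relation."""
    return 2.0 ** (delta_t_c / doubling_step_c)

# A PWM scheme that runs the transformer 6 degC cooler roughly doubles
# its expected insulation life; 12 degC cooler roughly quadruples it.
```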

Keywords: heating, power-inverter, PWM, transformer

Procedia PDF Downloads 412
467 Direct Contact Ultrasound Assisted Drying of Mango Slices

Authors: E. K. Mendez, N. A. Salazar, C. E. Orrego

Abstract:

There is undoubted proof that increasing fruit intake lessens the risk of hypertension, coronary heart disease and stroke, and probable evidence that it lowers the risk of cancer. Proper fruit drying is an excellent alternative that extends shelf-life, eases commercialization, and yields ready-to-eat healthy products or ingredients. The conventional way of drying is by forced hot-air convection. However, this process step often requires a very long residence time; furthermore, it is highly energy consuming and detrimental to product quality. Nowadays, the power ultrasound (US) technique is considered an emerging and promising technology for industrial food processing. Most published works on US-assisted food drying have studied the effect of ultrasonic pre-treatment prior to air-drying and the conditions of airborne US during dehydration. In this work, a new approach was tested, taking into account the drying time and two quality parameters of mango slices dehydrated by convection assisted by 20 kHz power US, applied directly using a holed plate as product support and sound-transmitting surface. During the drying of mango (Mangifera indica L.) slices (ca. 6.5 g, 0.006 m height and 0.040 m diameter), their weight was recorded every hour until the final moisture content (10.0±1.0% wet basis) was reached. After preliminary tests, optimization of three drying parameters - sonication intervals (2, 5 and 8 minutes each half-hour), air temperature (50-55-60 ⁰C) and power (45-70-95 W) - was attempted using a Box-Behnken design under the response surface methodology, for the optimal drying time, color parameters and rehydration rate of dried samples. The assays involved 17 experiments, including a quintuplicate of the central point. Dried samples, with and without US application, were packed in individual high-barrier plastic bags under vacuum and then stored in the dark at 8 ⁰C until analysis. All drying assays and sample analyses were performed in triplicate.
The US drying experimental data were fitted with nine models, among which the Verma model gave the best fit, with R² > 0.9999 and reduced χ² ≤ 0.000001. Significant reductions in drying time were observed for the assays that used shorter sonication intervals and high US power. At 55 ⁰C, 95 W and 2 min/30 min of sonication, 10% moisture content was reached in 211 min, compared with 320 min for the same test without US (blank). Rehydration rates (RR), defined as the ratio of rehydrated sample weight to dry sample weight, were also larger than those of the blanks and, in general, the higher the US power, the greater the RR. The direct-contact, intermittent US treatment of mango slices used in this work improves drying rates and the rehydration ability of the dried fruit. This technique can thus be used to reduce energy processing costs and the greenhouse gas emissions of fruit dehydration.
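The Verma thin-layer model selected above has the form MR = a·exp(−kt) + (1 − a)·exp(−gt). The sketch below fits it to a synthetic drying curve with SciPy; the data points and parameter values are invented for illustration and are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def verma(t, a, k, g):
    # Verma thin-layer drying model: MR = a*exp(-k t) + (1-a)*exp(-g t)
    return a * np.exp(-k * t) + (1 - a) * np.exp(-g * t)

# Synthetic moisture-ratio curve over ~320 min (hypothetical parameters)
t = np.linspace(0, 320, 33)            # time (min)
mr = verma(t, 0.8, 0.012, 0.03)        # "observed" moisture ratio

# Nonlinear least-squares fit from a rough initial guess
popt, _ = curve_fit(verma, t, mr, p0=[0.5, 0.01, 0.02])

# Coefficient of determination of the fitted curve
ss_res = np.sum((mr - verma(t, *popt)) ** 2)
ss_tot = np.sum((mr - mr.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

On noiseless synthetic data the fit is essentially exact; with real weight records the same R² and reduced χ² criteria the authors report would be used to rank the nine candidate models.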

Keywords: ultrasonic assisted drying, fruit drying, mango slices, contact ultrasonic drying

Procedia PDF Downloads 345
466 Adsorptive Media Selection for Bilirubin Removal: An Adsorption Equilibrium Study

Authors: Vincenzo Piemonte

Abstract:

The liver is a complex, large-scale biochemical reactor that plays a unique role in human physiology. When the liver ceases to perform its physiological activity, a functional replacement is required. At present, liver transplantation is the only clinically effective treatment for severe liver disease. This therapeutic approach is, however, hampered by the disparity between organ availability and the number of patients on the waiting list. To overcome this critical issue, research activities have focused on liver support device systems (LSDs) designed to bridge patients to transplantation or to keep them alive until native liver function recovers. In recirculating albumin dialysis devices, such as MARS (Molecular Adsorbent Recirculating System), adsorption is one of the fundamental steps in albumin-dialysate regeneration. Among the albumin-bound toxins that must be removed from blood during liver-failure therapy, bilirubin and tryptophan can be considered representative of two different toxin classes: the first not water soluble at physiological blood pH and strongly bound to albumin, the second loosely albumin bound and partially water soluble at pH 7.4. Fixed-bed units are normally used for this task, and the design of such units requires information on both toxin adsorption equilibrium and kinetics. The most common adsorptive media used in LSDs are activated carbon, non-ionic polymeric resins and anionic resins. In this paper, bilirubin adsorption isotherms on different adsorptive media, such as polymeric resin, albumin-coated resin, anionic resin, activated carbon and alginate beads with entrapped albumin, are presented. Comparing all the results, the adsorption capacity for bilirubin of the five media increases in the following order: alginate beads < polymeric resin < albumin-coated resin < activated carbon < anionic resin.
The main focus of this paper is to provide useful guidelines for the optimization of liver support devices which implement adsorption columns to remove albumin-bound toxins from albumin dialysate solutions.
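Equilibrium data of the kind compared above are commonly summarized by fitting an isotherm. As a sketch (not the authors' model or data), a Langmuir fit recovers an adsorption capacity qmax and an affinity constant K from a handful of equilibrium points:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qmax, K):
    # Langmuir isotherm: q = qmax * K * c / (1 + K * c)
    return qmax * K * c / (1 + K * c)

# Hypothetical equilibrium points: bilirubin concentration (mg/L)
# vs. uptake on a sorbent (mg/g); values are invented for illustration.
c = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
q = langmuir(c, 45.0, 0.05)  # synthetic data, qmax = 45 mg/g, K = 0.05 L/mg

popt, _ = curve_fit(langmuir, c, q, p0=[30.0, 0.01])
qmax_fit, K_fit = popt
```

Comparing fitted qmax values across media is one conventional way to express the capacity ranking reported in the abstract, and the fitted parameters feed directly into fixed-bed column design.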

Keywords: adsorptive media, adsorption equilibrium, artificial liver devices, bilirubin, mathematical modelling

Procedia PDF Downloads 256
465 Social Economic Factors Associated with the Nutritional Status of Children In Western Uganda

Authors: Baguma Daniel Kajura

Abstract:

The study explores socio-economic, health-related and individual factors that influence the breastfeeding habits of mothers and their effect on the nutritional status of their infants in the Rwenzori region of Western Uganda. A cross-sectional research design was adopted, using self-administered questionnaires, interview guides, and focus group discussion guides to assess the extent to which socio-demographic factors associated with breastfeeding practices influence child malnutrition. Under this design, data were collected from 276 of the 318 selected mother-infant pairs over a period of ten days. The sample size was obtained with the Kish Leslie formula for cross-sectional studies, N = Zα² P(1 − P)/δ², where N is the sample size estimate of mother-infant pairs; P is the assumed true population prevalence of malnutrition among mother-infant pairs, P = 29.3% (so 1 − P, the probability of a mother-infant pair not having malnutrition, is 70.7%); Zα is the standard normal deviate at the 95% confidence interval, corresponding to 1.96; and δ is the absolute error between the estimated and true population prevalence of malnutrition, set at 5%. The calculated sample size was N = 1.96 × 1.96 × (0.293 × 0.707)/0.05² = 318 mother-infant pairs. Demographic and socio-economic data for all mothers were entered into Microsoft Excel and then exported to STATA 14 (StataCorp, 2015). Anthropometric measurements were taken for all children by the researcher and trained assistants, who physically weighed the children. Immunization cards were used to ascertain each child's age. Bivariate logistic regression analysis was used to assess the relationship between socio-demographic factors associated with breastfeeding practices and child malnutrition.
Multivariable regression analysis was used to conclude whether there are any true relationships between the socio-demographic factors associated with breastfeeding practices, as independent variables, and child stunting and underweight, as dependent variables. Descriptive statistics on the background characteristics of the mothers were generated and presented in frequency distribution tables. Frequencies and means were computed, and the results were presented in tables; we then determined the distribution of stunting and underweight among infants by socio-economic and demographic factors. Findings reveal that children of mothers who used milk substitutes besides breastfeeding are over two times more likely to be stunted than those whose mothers exclusively breastfed them; feeding children milk substitutes instead of breastmilk predisposes them to both stunting and underweight. Children of mothers between 18 and 34 years of age are less likely to be underweight, as were those breastfed over ten times a day. The study further reveals that 55% of the children were underweight and 49% were stunted. Of the underweight children, equal numbers (58/151, i.e., 38% each) were mildly and moderately underweight, and 23% (35/151) were severely underweight. Empowering community outreach programs, by increasing knowledge of and access to services for the integrated management of child malnutrition, is crucial to curbing child malnutrition in rural areas.
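The Kish Leslie sample-size arithmetic reported above can be checked directly (a sketch using exactly the values stated in the abstract):

```python
def kish_sample_size(p, delta, z=1.96):
    # Kish Leslie formula for cross-sectional studies:
    # N = z^2 * p * (1 - p) / delta^2
    return z * z * p * (1 - p) / delta ** 2

# prevalence P = 29.3%, absolute error delta = 5%, 95% CI (z = 1.96)
n = kish_sample_size(p=0.293, delta=0.05)  # ~318.3, reported as 318 pairs
```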

Keywords: infant and young child feeding, breastfeeding, child malnutrition, maternal health

Procedia PDF Downloads 20
464 A Semi-Markov Chain-Based Model for the Prediction of Deterioration of Concrete Bridges in Quebec

Authors: Eslam Mohammed Abdelkader, Mohamed Marzouk, Tarek Zayed

Abstract:

Infrastructure systems are crucial to every aspect of life on Earth. Existing infrastructure is subject to degradation, while the demand grows for better infrastructure systems in response to high standards of safety, health, population growth, and environmental protection. Bridges play a crucial role in urban transportation networks; moreover, they are subjected to a high level of deterioration because of variable traffic loading, extreme weather conditions, cycles of freeze and thaw, etc. The development of Bridge Management Systems (BMSs) has become a fundamental imperative nowadays, especially in large transportation networks, due to the huge variance between the need for maintenance actions and the funds available to perform them. Deterioration models are a very important aspect of the effective use of BMSs. This paper presents a probabilistic time-based model capable of predicting the condition ratings of concrete bridge decks along their service life. The deterioration process of the concrete bridge decks is modeled as a semi-Markov process. One of the main challenges of the Markov-chain approach is the construction of the transition probability matrix; the proposed model overcomes this issue by modeling the sojourn times with probability density functions. The sojourn times of each condition state are fitted to probability density functions based on goodness-of-fit tests such as the Kolmogorov-Smirnov test, the Anderson-Darling test, and the chi-squared test. The parameters of the probability density functions are obtained by maximum likelihood estimation (MLE). Condition ratings obtained from the Ministry of Transportation in Quebec (MTQ) are utilized as a database to construct the deterioration model. Finally, a comparison is conducted between the Markov chain and the semi-Markov chain to select the most feasible prediction model.
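The sojourn-time fitting step can be sketched as follows: fit a candidate density to the observed sojourn times by maximum likelihood, then screen the fit with a Kolmogorov-Smirnov test. The Weibull candidate and the synthetic data below are assumptions for illustration; the paper also applies Anderson-Darling and chi-squared tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical sojourn times (years) spent in one bridge-deck condition state
sojourn = 8.0 * rng.weibull(1.5, size=200)

# Maximum likelihood fit of a Weibull density (location fixed at zero)
shape, loc, scale = stats.weibull_min.fit(sojourn, floc=0)

# Kolmogorov-Smirnov screen against the fitted density;
# a large p-value means the candidate is not rejected
ks_stat, p_value = stats.kstest(sojourn, 'weibull_min', args=(shape, loc, scale))
```

The fitted densities then replace the fixed transition probabilities of an ordinary Markov chain: the probability of leaving a condition state depends on how long the deck has already spent in it.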

Keywords: bridge management system, bridge decks, deterioration model, Semi-Markov chain, sojourn times, maximum likelihood estimation

Procedia PDF Downloads 211
463 Scaling up Small and Sick Newborn Care Through the Establishment of the First Human Milk Bank in Nepal

Authors: Prajwal Paudel, Shreeprasad Adhikari, Shailendra Bir Karmacharya, Kalpana Upadhyaya

Abstract:

Background: Human milk banks have been recommended by the World Health Organization (WHO) for newborn and child nourishment, providing optimum nutrition as an alternative to breastfeeding in circumstances where direct breastfeeding is inaccessible. Vulnerable babies, mainly preterm, low-birth-weight, and sick newborns, are at a greater risk of mortality and may benefit from the safe use of donated human milk through milk banks. In this study, we aimed to shed light on the process of setting up the nation’s first milk bank and its importance in the nutrition and care of small and sick newborns. Methods: The study was conducted in Paropakar Maternity and Women’s Hospital, where the first human milk bank (HMB) was established. The establishment involved a stepwise process: a need-assessment meeting; formation of the HMB committee; a learning visit to an HMB in India; study of the strengths and weaknesses of promoting breastfeeding and of HMB system integration; procurement, installation, and setting up of the infrastructure; development of technical competency; and launch of the HMB. After the initiation of HMB services, information on the recruited donor mothers and the volume of milk pasteurized and consumed by the recipient babies was recorded. Descriptive statistics with frequencies and percentages were used to describe the utilization of HMB services. Results: During the study period, a total of 506113 ml of milk was collected, while 49930 ml of milk was pasteurized. Of the pasteurized milk, 381248 ml was dispensed. The milk was received from a total of 883 donor mothers after proper routine screening tests. A total of 912 babies with different neonatal conditions received the donated human milk (DHM). Among the babies who received DHM, 527 (57.7%) were born via caesarean section and 385 (42.2%) were delivered normally.
In the birth-weight categories, 9 (1%) of the babies weighed less than 1000 grams, 75 (8.2%) less than 1500 grams, and 405 (44.4%) between 1500 and 2500 grams, whereas 423 (46.4%) of the babies who received DHM were of normal weight. Among the sick newborns, perinatal asphyxia accounted for 166 (18.2%), preterm with other complications 372 (40.7%), preterm 23 (2.0%), respiratory distress 140 (15.4%), neonatal jaundice 150 (16.4%), sepsis 94 (10.3%), meconium aspiration syndrome 9 (1%), seizure disorder 28 (3.1%), congenital anomalies 13 (1.4%) and others 33 (3.6%). The neonatal mortality rate dropped from 7.5/1000 live births in the previous year to 6.2/1000 live births in the first year after establishment. Conclusion: The establishment of the first HMB in Nepal involved a comprehensive approach to integrating a new system with existing newborn care to provide safe DHM. Premature babies with complications, babies born via caesarean section, and babies with perinatal asphyxia or sepsis consumed the greater proportion of DHM. Rigorous research is warranted to assess the impact of DHM on small and sick newborns who would otherwise be fed formula milk.

Keywords: human milk bank, sick-newborn, mortality, neonatal nutrition

Procedia PDF Downloads 11
462 Lack of Physical Activity In Schools: Study Carried Out on School-aged Adolescents

Authors: Bencharif Meriem, Sersar Ibrahim, Djaafri Zineb

Abstract:

Introduction and purpose of the study: Education plays a fundamental role in the lives of young people, but what about their physical well-being as they spend long hours sitting at school? School inactivity is a problem that deserves particular attention because it can have significant repercussions on the health and development of students. The aim of this study was to describe and evaluate the physical activity of students in different settings: in class, at recess and in the canteen. Material and methods: A physical activity diary and an anthropometric measurement sheet (weight, height) were provided to 123 school-aged adolescents. The measurements were carried out according to international recommendations. The statistical tests were performed with R software, version 3.2.4. The significance threshold retained was 0.05. Results and Statistical Analysis: One hundred and twenty-three students agreed to participate in the study. Their average age was 16.5±1.60 years. Overweight was present in 8.13% and obesity in 4.06%. Regarding physical activity, during physical education and sports classes all students played sports, with an average of 1.94±1.00 hours/week, and 74.00% sweated or were out of breath during these hours of physical activity. It was also noted that boys practiced sports more than girls (p<0.0001). Each day, students spent on average 39.78±37.85 min walking or running during recess. On the other hand, they spent on average 4.25±2.65 hours per day sitting in class, at recess, in the canteen, etc., without counting the time spent in front of a screen. The increasing use of screens has become a major concern for parents and educators. On average, students spent approximately 42.90±38.41 min per day using screens (computer, tablet, telephone, video games, etc.) in class, at recess, in the canteen and at home, contributing to a prolonged sedentary lifestyle.
On average, students sat for more than 1.5 hours without moving for at least 2 minutes in a row, approximately 1.72±0.71 times per day. Conclusion: These students spent many hours sitting at school. This prolonged inactivity can have negative consequences for their health, including problems with posture and cardiovascular health. It is crucial that schools, educators and parents collaborate to promote more active learning environments where students can move more, thus contributing to their overall well-being. It is time to rethink how we approach education and student health to give students a healthier, more active future.
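The boys-versus-girls comparison reported above (p<0.0001) can be illustrated with a short sketch. The abstract names only the analysis software (R 3.2.4) and the 0.05 threshold, not the specific test, so the Welch's t-test below, run on invented hours-per-week samples, is merely a hypothetical reconstruction of how such a sex difference could be checked:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical weekly sport hours; the paper reports 1.94±1.00 h/week overall
# and more practice among boys -- these samples are invented for illustration.
boys = rng.normal(2.4, 0.9, 70).clip(min=0)
girls = rng.normal(1.4, 0.8, 53).clip(min=0)

# Welch's t-test (no equal-variance assumption), alpha = 0.05
t, p = stats.ttest_ind(boys, girls, equal_var=False)
print(f"t = {t:.2f}, significant at 0.05: {p < 0.05}")
```

With real diary data, the same call would simply take the recorded samples in place of the simulated arrays.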

Keywords: physical activity, sedentary lifestyle, adolescents, school

Procedia PDF Downloads 60
461 Creative Mathematical Modelling Videos Developed by Engineering Students

Authors: Esther Cabezas-Rivas

Abstract:

Ordinary differential equations (ODEs) are a fundamental part of the curriculum for most engineering degrees, and students typically have difficulties with the subsequent abstract mathematical calculations. To enhance their motivation and take advantage of the fact that they are digital natives, we propose a teamwork project that includes the creation of a video. It should explain how to model a real-world problem mathematically, transforming it into an ODE, which should then be solved using the tools learned in the lectures. This idea was implemented with first-year students of a BSc in Engineering and Management during the period of online learning caused by the outbreak of COVID-19 in Spain. Each group of 4 students was assigned a different topic: model a hot water heater, search for the shortest path, design the quickest route for delivery, cooling a computer chip, the shape of the hanging cables of the Golden Gate Bridge, detecting land mines, rocket trajectories, etc. These topics were worked out through two complementary channels: a written report describing the problem and a 10-15 min video on the subject. The report includes the following items: description of the problem to be modeled, detailed derivation of the ODE that models the problem, its complete solution, and interpretation in the context of the original problem. We report the outcomes of this teaching-in-context and active learning experience, including the feedback received from the students. They highlighted the encouragement of creativity and originality, skills that they do not typically associate with mathematics. Additionally, the video format (unlike a common presentation) has the advantage of allowing them to critically review and self-assess the recording, repeating some parts until the result is satisfactory. As a side effect, they felt more confident about their oral abilities. In short, students agreed that they had fun preparing the video.
They recognized that it was tricky to combine deep mathematical content with entertainment since, without the latter, it is impossible to keep viewers engaged until the end of the video. Despite this difficulty, after the activity they claimed to understand the material better, and they enjoyed showing the videos to family and friends during and after the project.
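As an illustration of the kind of modelling the students were asked to film, consider the hot water heater topic: Newton's law of cooling turns the physical description into the ODE dT/dt = -k(T - T_env). The sketch below, with invented parameter values not taken from any student project, solves it numerically and checks the result against the closed-form solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Newton's law of cooling: dT/dt = -k (T - T_env); illustrative parameters
k, T_env, T0 = 0.05, 20.0, 80.0   # 1/min, deg C, deg C

def cooling(t, T):
    return -k * (T - T_env)

sol = solve_ivp(cooling, (0.0, 60.0), [T0], dense_output=True)

# Closed-form solution for comparison: T(t) = T_env + (T0 - T_env) * exp(-k t)
t = 30.0
numeric = sol.sol(t)[0]
analytic = T_env + (T0 - T_env) * np.exp(-k * t)
print(f"T({t:.0f} min): numeric {numeric:.2f}, analytic {analytic:.2f}")
```

The written report would contain exactly this pair of steps: the derivation of the closed-form solution and its interpretation (the tank temperature decays exponentially toward the ambient temperature).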

Keywords: active learning, contextual teaching, models in differential equations, student-produced videos

Procedia PDF Downloads 145
460 Embedding Looping Concept into Corporate CSR Strategy for Sustainable Growth: An Exploratory Study

Authors: Vani Tanggamani, Azlan Amran

Abstract:

The issues of Corporate Social Responsibility (CSR) have been extended from developmental economics to corporate and business contexts in recent years. Research into issues related to CSR is deemed to make a higher impact, as CSR encourages long-term economic and business success without neglecting social and environmental risks, obligations and opportunities. CSR is therefore a key matter for any organisation aiming for long-term sustainability, since such a business incorporates principles of social responsibility into each of its decisions. Thus, this paper presents a theoretical proposition based on stakeholder theory from the organisational perspective as a foundation for better CSR practices. The primary subject of this paper is to explore how the looping concept can be effectively embedded into corporate CSR strategy to foster sustainable long-term growth. In general, a loop is a structure or process whose end is connected to its beginning; in the narrower business sense, a loop means plan, do, check, and improve. In this sense, the looping concept is a blend of balance and agility, with the awareness of when to apply each. Organisations can introduce similar pull mechanisms by formulating CSR strategies in order to perform the best plan of actions in real time, with the chance then to change those actions, pushing them toward well-organised planning and successful performance. Through the analysis of an exploratory study, this paper demonstrates that approaching the looping concept in the context of corporate CSR strategy is an important source of new ideas to propel CSR practices, deepening the basic understanding of the looping concept that is increasingly necessary to attract and retain business stakeholders, including employees, customers, suppliers and other communities, for long-term business survival.
This paper contributes to the literature by providing a fundamental explanation of how organisations will experience less financial and reputational risk if looping-concept logic is integrated into the core business CSR strategy. The value of the paper rests in the treatment of the looping concept as a corporate CSR strategy, demonstrating a "looping concept implementation framework for CSR" that could further foster business sustainability and help organisations move along the path from laggards to leaders.

Keywords: corporate social responsibility, looping concept, stakeholder theory, sustainable growth

Procedia PDF Downloads 400
459 The Effects of Shift Work on Neurobehavioral Performance: A Meta Analysis

Authors: Thomas Vlasak, Tanja Dujlociv, Alfred Barth

Abstract:

Shift work is an essential element of modern labor, ensuring ideal conditions of service for today’s economy and society. Despite these beneficial properties, its impact on the neurobehavioral performance of exposed subjects remains controversial. This meta-analysis aims to provide a first summary of the effects regarding the association between shift work exposure and different cognitive functions. A literature search was performed via the databases PubMed, PsycINFO, PsycARTICLES, MedLine, PsycNET and Scopus, including eligible studies until December 2020 that compared shift workers with non-shift workers on neurobehavioral performance tests. A random-effects model was fitted using Hedges’ g as the meta-analytical effect size, with a restricted maximum likelihood estimator, to summarize the mean differences between the exposure group and controls. The heterogeneity of effect sizes was addressed by a sensitivity analysis using funnel plots, Egger’s tests, p-curve analysis, meta-regressions, and subgroup analysis. The meta-analysis included 18 studies, resulting in a total sample of 18,802 participants and 37 effect sizes concerning six different neurobehavioral outcomes. The results showed significantly worse performance in shift workers compared to non-shift workers in the following cognitive functions, with g (95% CI): processing speed 0.16 (0.02 - 0.30), working memory 0.28 (0.51 - 0.50), psychomotor vigilance 0.21 (0.05 - 0.37), cognitive control 0.86 (0.45 - 1.27) and visual attention 0.19 (0.11 - 0.26). Neither significant moderating effects of publication year or study quality nor significant subgroup differences regarding type of shift or type of profession were indicated for the cognitive outcomes. These are the first meta-analytical findings that associate shift work with decreased cognitive performance in processing speed, working memory, psychomotor vigilance, cognitive control, and visual attention.
Further studies should focus on a more homogeneous measurement of cognitive functions, a precise assessment of shift work experience, and occupation types that are underrepresented in the current literature (e.g., law enforcement). In occupations where shift work is fundamental (e.g., healthcare, industry, law enforcement), protective countermeasures should be promoted for workers.
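The pooling step described above can be sketched numerically. The paper fits a random-effects model with a restricted maximum likelihood estimator; the snippet below uses the simpler DerSimonian-Laird estimator of the between-study variance instead, on invented per-study Hedges' g values and variances, purely to show the mechanics of producing a pooled g with its 95% CI:

```python
import numpy as np

# Invented per-study effects (Hedges' g) and sampling variances
g = np.array([0.05, 0.45, 0.30, 0.10, 0.25])
v = np.array([0.010, 0.015, 0.012, 0.008, 0.020])

# Fixed-effect weights and Cochran's Q
w = 1.0 / v
q = np.sum(w * (g - np.sum(w * g) / np.sum(w)) ** 2)

# DerSimonian-Laird between-study variance tau^2 (truncated at 0)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(g) - 1)) / c)

# Random-effects pooled estimate with 95% confidence interval
w_re = 1.0 / (v + tau2)
g_pooled = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
ci = (g_pooled - 1.96 * se, g_pooled + 1.96 * se)
print(f"g = {g_pooled:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f}), tau^2 = {tau2:.4f}")
```

REML estimates tau^2 iteratively rather than in closed form, but the pooled effect and CI are assembled from the weights in exactly this way.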

Keywords: meta-analysis, neurobehavioral performance, occupational psychology, shift work

Procedia PDF Downloads 108
458 Ecology, Value-Form and Metabolic Rift: Conceptualizing the Environmental History of the Amazon in the Capitalist World-System (19th-20th centuries)

Authors: Santiago Silva de Andrade

Abstract:

In recent decades, Marx's ecological theory of the value-form and the theory of metabolic rift have represented fundamental methodological innovations for social scientists interested in environmental transformations and their relationships with the development of the capital system. However, among Latin American environmental historians, such theoretical and methodological instruments have been used infrequently and very cautiously. This investigation aims to demonstrate how the concepts of metabolic rift and ecological value-form are important for understanding the environmental, economic and social transformations in the Amazon region between the second half of the 19th century and the end of the 20th century. Such transformations manifested themselves mainly in two dimensions: the first concerns the link between the manufacture of tropical substances for export and scientific developments in the fields of botany, chemistry and agriculture. This link was constituted as a set of social, intellectual and economic relations that condition each other, configuring an asymmetrical field of exchanges and connections between the demands of the industrialized world - personified in scientists, naturalists, businesspeople and bureaucrats - and the agencies of local social actors, such as indigenous people, riverside dwellers and quilombolas; the second dimension concerns the imperative link between the historical development of the capitalist world-system and the restructuring of the natural world, its landscapes, biomes and social relations, notably in peripheral colonial areas. The environmental effects of capitalist globalization were not only seen in the degradation of exploited environments, although this has been, until today, its most immediate and noticeable aspect. 
There was also, in territories subject to the logic of market accumulation, a reformulation of patterns of authority and institutional architectures, such as property systems, political jurisdictions, rights and social contracts, as a result of the expansion of commodity frontiers between the 16th and 21st centuries. This entire set of transformations produced impacts on the ecological landscape of the Amazon, demonstrating the need to investigate the histories of local configurations of power, spatial and ecological, with their institutions and social actors, and their role in structuring the capitalist world-system, under the lens of the ecological theory of value-form and metabolic rift.

Keywords: Amazon, ecology, value-form, metabolic rift

Procedia PDF Downloads 64
457 Antimicrobial Efficacy of Some Antibiotics Combinations Tested against Some Molecular Characterized Multiresistant Staphylococcus Clinical Isolates, in Egypt

Authors: Nourhan Hussein Fanaki, Hoda Mohamed Gamal El-Din Omar, Nihal Kadry Moussa, Eva Adel Edward Farid

Abstract:

The resistance of staphylococci to various antibiotics has become a major concern for health care professionals. The efficacy of combinations of selected glycopeptides (vancomycin and teicoplanin) with gentamicin or rifampicin, as well as that of the gentamicin/rifampicin combination, was studied against selected pathogenic staphylococci isolated in Egypt. The molecular distribution of genes conferring resistance to these four antibiotics was determined among the tested clinical isolates. Antibiotic combinations were studied using the checkerboard technique and the time-kill assay (in both the stationary and log phases). Induction of resistance to glycopeptides in staphylococci was attempted in the absence and presence of diclofenac sodium as an inducer. Transmission electron microscopy was used to study the effect of glycopeptides on the ultrastructure of the staphylococcal cell wall. Attempts were made to cure gentamicin resistance plasmids and to study the transfer of these plasmids by conjugation. Trials for the transformation of the successfully isolated gentamicin resistance plasmid into competent cells were carried out. The detection of genes conferring resistance to the tested antibiotics was performed using the polymerase chain reaction. The studied antibiotic combinations proved their efficacy, especially when tested during the log phase. Induction of resistance to glycopeptides in staphylococci was more successful in the presence of diclofenac sodium than in its absence. Transmission electron microscopy revealed thickening of the bacterial cell wall in staphylococcus clinical isolates due to the presence of the tested glycopeptides. Curing of gentamicin resistance plasmids was only successful in 2 out of 9 tested isolates, with a curing rate of 1 percent for each. Both isolates, when used as donors in conjugation experiments, yielded promising conjugation frequencies ranging between 5.4 × 10⁻² and 7.48 × 10⁻² colony-forming units/donor cell.
Plasmid isolation was only successful in one of the two tested isolates. However, a low transformation efficiency (59.7 transformants/microgram plasmid DNA) of such plasmids was obtained. Negative regulators of autolysis, such as arlR, lytR and lrgB, as well as cell wall-associated genes, such as pbp4 and/or pbp2, were detected in staphylococcus isolates with reduced susceptibility to the tested glycopeptides. Concerning rifampicin resistance genes, rpoBstaph was detected in 75 percent of the tested staphylococcus isolates. It can be concluded that the in vitro studies emphasized the usefulness of the combination of vancomycin or teicoplanin with gentamicin or rifampicin, as well as that of gentamicin with rifampicin, against staphylococci showing varying resistance patterns. However, further in vivo studies are required to ensure the safety and efficacy of such combinations. Diclofenac sodium can act as an inducer of resistance to glycopeptides in staphylococci, and cell-wall thickness is a major contributor to such resistance. Gentamicin resistance in these strains could be chromosomally or plasmid mediated. Multiple mutations in the rpoB gene could mediate staphylococcal resistance to rifampicin.

Keywords: glycopeptides, combinations, induction, diclofenac, transmission electron microscopy, polymerase chain reaction

Procedia PDF Downloads 292
456 Inputs and Outputs of Innovation Processes in the Colombian Services Sector

Authors: Álvaro Turriago-Hoyos

Abstract:

Most research tends to see innovation as an explanatory factor in achieving high levels of competitiveness and productivity. More recent studies have begun to analyze the determinants of innovation in the services sector, as opposed to the much-discussed industrial sector of a country’s economy. This research paper focuses on the services sector in Colombia, one of Latin America’s fastest-growing and biggest economies. Over the past decade, much of Colombia’s economic expansion has relied on commodity exports (mainly oil and coffee), whilst the industrial sector has performed relatively poorly. Such developments highlight the potential of the innovative role played by the services sector of the Colombian economy and its future growth prospects. This research paper analyzes the relationship between innovation inputs, comprising internal sources of innovation (such as R&D activities) and external sources (such as technology acquisition), and innovation outputs, basically the four kinds of innovation that the OECD Oslo Manual recognizes: product, process, marketing and organizational innovations. The instrument used to measure this input-output relationship is based on Knowledge Production Function approaches. We run Probit models in order to identify the existing relationships between the above inputs and outputs, but also to identify spillovers derived from interactions of the components of the value chain of the services firms analyzed: customers, suppliers, competitors, and complementary firms. Data are obtained from the Colombian National Administrative Department of Statistics for the period 2008 to 2013, published in the II and III Colombian National Innovation Surveys. A short summary of the results leads to the conclusion that firm size and a firm’s level of technological development turn out to be important discriminating factors in the description of the innovative process at the firm level.
The model’s outcomes show a positive impact of both R&D and technology acquisition investment on the probability of introducing any kind of innovation. Cooperation agreements with customers, research institutes, competitors, and suppliers are also significant. Belonging to a particular industrial group is an important determinant, but only for product and organizational innovation. Health services, education, computing, wholesale trade, and financial intermediation are the ISIC sectors that report the highest frequencies among the considered set of firms. These five of the sixteen sectors considered explained, in all cases, more than half of the total of all kinds of innovations. Product innovation shows the highest results, followed by marketing innovation. Distinguishing the same set of firms by size and by membership of the high- or low-tech services sector shows that larger firms produce a larger number of innovations, and that high-tech firms consistently show better innovation performance.
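The probit specification behind these results can be illustrated with a self-contained maximum-likelihood fit. The firm-level variables and coefficients below are simulated, not drawn from the Colombian survey data; the point is only the latent-index model Pr(innovate) = Φ(x'β) that the paper's Probit models estimate:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
# Simulated firm-level covariates (hypothetical): R&D intensity, log firm size
rd = rng.uniform(0.0, 1.0, n)
size = rng.normal(0.0, 1.0, n)
X = np.column_stack([np.ones(n), rd, size])
beta_true = np.array([-0.5, 1.2, 0.4])
# Latent-index probit: a firm innovates when x'beta + e > 0 with e ~ N(0, 1)
y = (X @ beta_true + rng.normal(0.0, 1.0, n) > 0).astype(float)

def neg_loglik(beta):
    p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_loglik, np.zeros(3), method="BFGS")
print("estimated beta:", np.round(res.x, 2))
```

With the survey data, the binary outcome would be each Oslo Manual innovation type and the covariates the firms' input and cooperation variables; dedicated routines (e.g., a probit in a statistics package) wrap this same likelihood.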

Keywords: Colombia, determinants of innovation, innovation, services sector

Procedia PDF Downloads 267
455 Evaluation of Cryoablation Procedures in Treatment of Atrial Fibrillation from 3 Years' Experiences in a Single Heart Center

Authors: J. Yan, B. Pieper, B. Bucsky, B. Nasseri, S. Klotz, H. H. Sievers, S. Mohamed

Abstract:

Cryoablation is increasingly applied for the interventional treatment of paroxysmal (PAAF) or persistent atrial fibrillation (PEAF). In cardiac surgery, this procedure is often combined with coronary artery bypass grafting (CABG) and valve operations. Three different methods, differing in extent and mechanism, are practiced in our heart center: lone left atrial cryoablation, Cox-Maze IV and Cox-Maze III. 415 patients (68 ± 0.8 years, 68.2% male) with pre-existing atrial fibrillation who initially required either coronary or valve operations were enrolled and divided into 3 matched groups according to the deployed procedure: CryoLA group (cryoablation of the lone left atrium, n=94), Cox-Maze IV group (n=93) and Cox-Maze III group (n=8). All patients additionally received closure of the left atrial appendage (LAA) and regularly underwent ambulant follow-up assessments over three years (3, 6, 9, 12, 18, 24, 30 and 36 months). The burden of atrial fibrillation was assessed directly by means of a cardiac monitor (Reveal XT, Medtronic) or a 3-day Holter electrocardiogram. The frequency of AF attacks and their circadian patterns were systematically analyzed. Furthermore, anticoagulants and regular rate-/rhythm-controlling medications were evaluated and classified as anti-rate or anti-rhythm regimens. Concerning PAAF treatment, the Cox-Maze IV procedure provided a therapeutically acceptable effect, as did lone left atrium (LA) cryoablation (5.25 ± 5.25% vs. 10.39 ± 9.96% AF burden, p > 0.05). Interestingly, the Cox-Maze III method presented a better short-term effect in PEAF therapy in comparison to lone cryoablation of the LA and Cox-Maze IV (0.25 ± 0.23% vs. 15.31 ± 5.99% and 9.10 ± 3.73% AF burden within the first year, p < 0.05). However, this therapeutic advantage was lost during the ongoing follow-ups (26.65 ± 24.50% vs. 8.33 ± 8.06% and 15.73 ± 5.88% in the 3rd follow-up year).
In this way, lone LA cryoablation established its antiarrhythmic efficacy, and 69.5% of patients were released from vitamin K antagonists, while Cox-Maze IV liberated 67.2% of patients from continuous anticoagulant medication. For all 3 procedures, AF recurrences mostly presented as attacks of less than 60 min duration (p > 0.05). Regarding the circadian distribution of recurrence attacks, weighted over the ongoing follow-ups, lone LA cryoablation achieved and stabilized its antiarrhythmic effects over time, which was especially observed in the treatment of PEAF, while the antiarrhythmic effects of Cox-Maze IV and III weakened progressively. This phenomenon was likewise evident in the therapy of the circadian rhythm of recurring AF attacks. Furthermore, the strategy of rate control was applied much more often than rhythm control to support and maintain the therapeutic successes obtained. Based on the experience of our heart center, lone LA cryoablation presented effects equivalent to the Cox-Maze IV and III procedures in the treatment of AF. These therapeutic successes were especially evident in patients suffering from persistent AF (PEAF). Additional supportive strategies, such as a rate control regime, should be initiated and implemented according to appropriate criteria to improve the therapeutic effects of cryoablation.

Keywords: AF-burden, atrial fibrillation, cardiac monitor, COX MAZE, cryoablation, Holter, LAA

Procedia PDF Downloads 204
454 Controlled Digital Lending, Equitable Access to Knowledge and Future Library Services

Authors: Xuan Pang, Alvin L. Lee, Peggy Glatthaar

Abstract:

Libraries across the world have been an engine of creativity, innovation and opportunity for many decades. The ongoing global epidemic and health crisis illuminates potential reforms, rethinking beyond traditional library operations and services. Controlled Digital Lending (CDL) is one of the emerging technologies libraries have used to deliver information digitally in support of online learning and teaching and to make educational materials more affordable and more accessible. CDL became a popular term in the United States of America (USA) as a result of a white paper authored by Kyle K. Courtney (Harvard University) and David Hansen (Duke University). The paper laid the legal groundwork for exploring CDL: fair use, the first sale doctrine, and Supreme Court rulings. Library professionals implemented this new technology to fulfill their users’ needs. Three libraries in the state of Florida (University of Florida, Florida Gulf Coast University, and Florida A&M University) started a conversation about how to develop strategies to make CDL work at each institution. This paper shares the stories of piloting and initiating a CDL program to ensure students have reliable, affordable access to the course materials they need to be successful. Additionally, this paper offers an overview of the emerging trends of Controlled Digital Lending in the USA and describes the development of the CDL platforms, policies, and implementation plans. The paper further discusses challenges and lessons learned, and how each institution plans to sustain the program in future library services. The fundamental mission of the library is to provide users unrestricted access to library resources regardless of their physical location, disability, health status, or other circumstances.
The professional due diligence of librarians, as information professionals, is to make educational resources more affordable and accessible. CDL opens a new frontier of library services as a mechanism for library practice to enhance the user’s experience of library services. Libraries should consider exploring this tool to distribute library resources in an effective and equitable way. This new methodology has potential benefits for both libraries and end users.

Keywords: controlled digital lending, emerging technologies, equitable access, collaborations

Procedia PDF Downloads 135
453 Multi-Omics Integrative Analysis Coupled to Control Theory and Computational Simulation of a Genome-Scale Metabolic Model Reveal Controlling Biological Switches in Human Astrocytes under Palmitic Acid-Induced Lipotoxicity

Authors: Janneth Gonzalez, Andrés Pinzon Velasco, Maria Angarita

Abstract:

Astrocytes play an important role in various processes in the brain, including pathological conditions such as neurodegenerative diseases. Recent studies have shown that an increase in saturated fatty acids such as palmitic acid (PA) triggers pro-inflammatory pathways in the brain. The use of synthetic neurosteroids such as tibolone has demonstrated neuroprotective mechanisms. However, broad studies with a systemic point of view on the neurodegenerative role of PA and the neuroprotective mechanisms of tibolone are lacking. In this study, we performed the integration of multi-omic data (transcriptome and proteome) into a genome-scale metabolic model of the human astrocyte to study the astrocytic response during palmitate treatment. We evaluated metabolic fluxes in three scenarios (healthy, inflammation induced by PA, and tibolone treatment under PA inflammation). We also applied a control theory approach to identify the reactions that exert the most control in the astrocytic system. Our results suggest that PA modulates central and secondary metabolism, producing a switch in energy source use through inhibition of the folate cycle and fatty acid β-oxidation and upregulation of ketone body formation. We found 25 metabolic switches under PA-mediated cellular regulation, 9 of which were critical only in the inflammatory scenario but not in the protective tibolone one. Within these reactions, inhibitory, total, and directional coupling profiles were key findings, playing a fundamental role in the (de)regulation of metabolic pathways that may increase neurotoxicity and represent potential treatment targets. Finally, the overall framework of our approach facilitates the understanding of complex metabolic regulation, and it can be used for the in silico exploration of the mechanisms of astrocytic cell regulation, directing more complex future experimental work on neurodegenerative diseases.
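The flux evaluation at the core of such a genome-scale analysis is typically flux balance analysis: a linear program over the model's stoichiometric matrix. The toy network below (two metabolites, three reactions, invented bounds, nothing like the size of an actual astrocyte model) shows that core computation under the steady-state assumption S v = 0:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites A, B; columns: reactions)
#   R1: uptake -> A,  R2: A -> B,  R3: B -> biomass
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake R1 capped at 10 units

# Flux balance analysis: maximize biomass flux v3 subject to S v = 0
c = np.array([0.0, 0.0, -1.0])             # linprog minimizes, so negate
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)
```

In a real study, the same LP is posed over thousands of reactions, with the integrated omics data constraining the flux bounds in each scenario.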

Keywords: astrocytes, data integration, palmitic acid, computational model, multi-omics

Procedia PDF Downloads 97
452 Electromagnetic Modeling of a MESFET Transistor Using the Moments Method Combined with Generalised Equivalent Circuit Method

Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili

Abstract:

The demands of communications and radar systems give rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are simplicity of fabrication, low manufacturing cost, and the combination of free-space power and beam scanning without a phase shifter. Active integrated antenna modeling couples the electromagnetic model with the transport model, which becomes significant at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. The current work focuses on the modeling of the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MOM-GEC). The Method of Moments (MOM) is among the most common and powerful numerical techniques for resolving electromagnetic problems, and within this class it is the dominant technique for solving Maxwell's and the transport integral equations for an active integrated antenna. In this situation, the equivalent circuit is introduced to develop an integral-method formulation based on the transposition of field problems into a generalised equivalent circuit that is simpler to treat. The Generalised Equivalent Circuit method (MGEC) was suggested in order to represent integral-equation circuits that describe the unknown electromagnetic boundary conditions. The equivalent circuit presents a true electric image of the studied structure, describing the discontinuity and its environment. The aim of our developed method is to investigate antenna parameters such as the input impedance, the current density distribution and the electric field distribution.
In this work, we propose a global EM model of the GaAs MESFET transistor using an integral method. We begin by describing the modeling structure, which allows us to define an equivalent EM scheme translating the electromagnetic equations considered. Secondly, the projection of these equations onto common-type test functions leads to a linear matrix equation whose unknowns are the amplitudes of the current density. Solving this equation provides the input impedance, the distribution of the current density and the electric field distribution. From the electromagnetic calculations, we were able to present the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper presents a pilot study mapping out the variation of the current evaluated by the MOM-GEC. The essential improvement of our method is the reduction of computing time and memory requirements needed to provide a sufficiently global model of the MESFET transistor.
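The projection step described above, expansion on basis functions followed by testing, always reduces the integral equation to a linear system of the form Z a = V. The electrostatic toy problem below (a thin straight wire held at 1 V, solved with pulse basis functions and point matching; the dimensions are invented and the problem is far simpler than a waveguide-mounted MESFET) illustrates that structure:

```python
import numpy as np

# Method of Moments toy: thin straight wire held at 1 V.
# Pulse basis + point matching reduce the integral equation to Z q = V.
eps0 = 8.854e-12
L, a, N = 1.0, 1e-3, 100            # wire length (m), radius (m), segments
dx = L / N
x = (np.arange(N) + 0.5) * dx       # segment centers used as match points

# Moment matrix from the static kernel 1 / (4*pi*eps0*R), reduced-kernel form
R = np.sqrt((x[:, None] - x[None, :]) ** 2 + a ** 2)
Z = dx / (4.0 * np.pi * eps0 * R)

V = np.ones(N)                      # 1 V boundary condition on every segment
lam = np.linalg.solve(Z, V)         # line charge density amplitudes
C = lam.sum() * dx                  # total charge = capacitance at 1 V
print(f"wire capacitance ~ {C * 1e12:.2f} pF")
```

The MOM-GEC formulation replaces this static kernel with the waveguide's modal Green's function and the unknowns become current density amplitudes, but the resulting matrix equation is assembled and solved in the same way.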

Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method

Procedia PDF Downloads 198
451 Waste Management in a Hot Laboratory of Japan Atomic Energy Agency – 1: Overview and Activities in Chemical Processing Facility

Authors: Kazunori Nomura, Hiromichi Ogi, Masaumi Nakahara, Sou Watanabe, Atsuhiro Shibata

Abstract:

The Chemical Processing Facility of the Japan Atomic Energy Agency is a basic research field for advanced back-end technology development using actual high-level radioactive materials, such as irradiated fuels from the fast reactor and high-level liquid waste from the reprocessing plant. In the nature of a research facility, various kinds of chemical reagents have been used for fundamental tests. Most of them were treated properly and stored in the liquid waste vessel of the facility, but some were not treated and remain in the experimental space as a kind of legacy waste, which must be treated safely. Meanwhile, we formulated the Medium- and Long-Term Management Plan of Japan Atomic Energy Agency Facilities. This comprehensive plan designates the Chemical Processing Facility as one of the facilities to be decommissioned; even so, treating the legacy waste beforehand is a necessary step for the decommissioning operation. Under this circumstance, we launched a collaborative research project called the STRAD project (Systematic Treatment of Radioactive liquid waste for Decommissioning) in order to develop treatment processes for the wastes of a nuclear research facility. In this project, decomposition methods for chemicals that cause troublesome phenomena such as corrosion and explosion have been developed, with the prospect of decomposing them in the facility by simple methods. Solidification of aqueous or organic liquid wastes after decomposition has been studied by adding cement or coagulants. Furthermore, we treated experimental tools of various materials, making an effort to stabilize and compact them before packing them into waste containers. This is expected to reduce the number of solid-waste transport operations and to widen the operating space. Some achievements of these studies are presented in this paper.
The project is expected to contribute beneficial waste management outcomes that can be shared worldwide.

Keywords: chemical processing facility, medium- and long-term management plan of JAEA facilities, STRAD project, treatment of radioactive waste

Procedia PDF Downloads 142
450 A Risk-Based Approach to Construction Management

Authors: Chloe E. Edwards, Yasaman Shahtaheri

Abstract:

Risk management plays a fundamental role in project planning and delivery. The purpose of incorporating risk management into project management practice is to identify and address uncertainties related to key project activities. These uncertainties, known as risk events, relate to quantifiable project deliverables and are often measured by their impact on project schedule, cost, or the environment. Risk management should be incorporated as an iterative practice throughout the planning, execution, and commissioning phases of a project. This paper examines how risk management contributes to effective project planning and delivery through a case study of a transportation project. The case study focused solely on impacts to the project schedule for three milestones: readiness for delivery, readiness for testing and commissioning, and completion of the facility. It followed the ISO 31000: Risk Management – Guidelines. The key factors outlined by these guidelines include understanding the scope and context of the project; conducting a risk assessment comprising identification, analysis, and evaluation; and, lastly, risk treatment through mitigation measures. This process requires continuous consultation with subject matter experts and ongoing monitoring to iteratively update the risks. The risk identification process led to a total of fourteen risks related to design, permitting, construction, and commissioning. The analysis involved running 1,000 Monte Carlo simulations in @RISK 8.0 Industrial software to determine potential milestone completion dates based on the project baseline schedule, namely the best-case, most-likely, and worst-case dates, providing an estimated delay for each milestone. Evaluation of these results provided insight into which risks contributed most to the projected milestone completion dates.
Based on the analysis results, the risk management team was able to provide recommendations for mitigation measures to reduce the likelihood of risks occurring. The risk management team also provided recommendations for managing the identified risks and project activities moving forward to meet the most likely or best-case milestone completion dates.
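The milestone-delay analysis described above can be approximated with a plain-Python Monte Carlo sketch. The risk register, occurrence probabilities, and triangular delay distributions below are hypothetical stand-ins, not the study's fourteen risks or its @RISK model:

```python
import random
import statistics

# Hypothetical risk register: each risk has a probability of occurring and a
# triangular delay distribution (min, most likely, max) in days.
RISKS = [
    {"name": "permitting delay", "p": 0.30, "delay": (10, 20, 45)},
    {"name": "design rework",    "p": 0.20, "delay": (5, 15, 30)},
    {"name": "supply shortage",  "p": 0.25, "delay": (7, 14, 60)},
]

def simulate_milestone_delay(risks, n_iter=1000, seed=42):
    """Monte Carlo over the risk register; returns per-iteration total delays."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_iter):
        total = 0.0
        for r in risks:
            if rng.random() < r["p"]:                   # does the risk event occur?
                lo, mode, hi = r["delay"]
                total += rng.triangular(lo, hi, mode)   # sampled schedule impact
        totals.append(total)
    return totals

delays = simulate_milestone_delay(RISKS)
best = min(delays)                    # best-case milestone delay
worst = max(delays)                   # worst-case milestone delay
p50 = statistics.median(delays)       # most-likely (median) delay
```

Adding the sampled delays to the baseline milestone dates would then yield the best-case, most-likely, and worst-case completion dates discussed in the abstract.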

Keywords: construction management, monte carlo simulation, project delivery, risk assessment, transportation engineering

Procedia PDF Downloads 107
449 Time to Second Line Treatment Initiation Among Drug-Resistant Tuberculosis Patients in Nepal

Authors: Shraddha Acharya, Sharad Kumar Sharma, Ratna Bhattarai, Bhagwan Maharjan, Deepak Dahal, Serpahine Kaminsa

Abstract:

Background: Drug-resistant (DR) tuberculosis (TB) continues to be a threat in Nepal, with an estimated 2,800 new cases every year. Treatment of DR-TB with second-line TB drugs is complex, takes longer, and has a comparatively lower treatment success rate than treatment of drug-susceptible TB. Delays in treatment initiation for DR-TB patients may further result in unfavorable treatment outcomes and increased transmission. This study therefore aims to determine the median time to second-line treatment initiation among patients diagnosed with Rifampicin-Resistant (RR) TB and to assess the proportion of treatment delays among the various types of DR-TB cases. Method: A retrospective cohort study was done using national routine electronic data (DRTB and TB Laboratory Patient Tracking System-DHIS2) on drug-resistant tuberculosis patients between January 2020 and December 2022. The time to treatment initiation was computed as the number of days from first diagnosis of RR TB through the Xpert MTB/RIF test to enrollment on second-line treatment. Treatment delay (>7 days after diagnosis) was calculated. Results: Among all RR TB cases (N=954) diagnosed via Xpert nationwide, 61.4% were enrolled under the shorter treatment regimen (STR), 33.0% under the longer treatment regimen (LTR), 5.1% under pre-extensively drug-resistant TB (pre-XDR) treatment, and 0.4% under extensively drug-resistant TB (XDR) treatment. Among these cases, the median time from diagnosis to treatment initiation was 6 days (IQR: 2-15.8): 5 days (IQR: 2.0-13.3) for STR, 6 days (IQR: 3.0-15.0) for LTR, 30 days (IQR: 5.5-66.8) for pre-XDR, and 4 days (IQR: 2.5-9.0) for XDR TB cases. Overall treatment delay (>7 days after diagnosis) was observed in 42.4% of the patients; pre-XDR cases contributed substantially to treatment delay (72.0%), followed by LTR (43.6%), STR (39.1%), and XDR (33.3%).
Conclusion: Timely diagnosis and prompt treatment initiation remain a fundamental focus of the National TB Program. The findings of this study, however, suggest gaps in the timeliness of treatment initiation for drug-resistant TB patients, which could lead to adverse treatment outcomes; the delay in second-line treatment initiation for pre-XDR TB patients is especially alarming. This study therefore provides evidence of existing gaps in treatment initiation and highlights the need for specific policies and interventions to create an effective linkage between RR TB diagnosis and enrollment on second-line TB treatment, with intensified follow-up efforts from health providers and with expansion of decentralized, adequate, and accessible diagnostic and treatment services for DR-TB, especially for pre-XDR TB cases, given the long treatment delays observed.
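The time-to-treatment metrics reported above (median, IQR, and the proportion delayed beyond 7 days) can be computed from diagnosis and enrollment dates as sketched below; the four patient records are invented for illustration and are not DHIS2 data:

```python
from datetime import date
import statistics

# Illustrative patient records: Xpert MTB/RIF diagnosis date and the date of
# enrollment on second-line treatment (hypothetical values).
records = [
    {"diagnosed": date(2021, 3, 1),  "enrolled": date(2021, 3, 4)},
    {"diagnosed": date(2021, 5, 10), "enrolled": date(2021, 5, 12)},
    {"diagnosed": date(2021, 7, 2),  "enrolled": date(2021, 7, 20)},
    {"diagnosed": date(2021, 9, 15), "enrolled": date(2021, 9, 21)},
]

# Days from diagnosis to second-line treatment initiation, per patient
days = sorted((r["enrolled"] - r["diagnosed"]).days for r in records)

median_days = statistics.median(days)
q1, _, q3 = statistics.quantiles(days, n=4)   # IQR bounds (exclusive method)
delayed = sum(1 for d in days if d > 7)       # study definition: delay = >7 days
pct_delayed = 100 * delayed / len(days)
```

Grouping the same computation by regimen (STR, LTR, pre-XDR, XDR) would reproduce the per-regimen medians and delay proportions the abstract reports.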

Keywords: drug-resistant, tuberculosis, treatment initiation, Nepal, treatment delay

Procedia PDF Downloads 85
448 Magnetic Biomaterials for Removing Organic Pollutants from Wastewater

Authors: L. Obeid, A. Bee, D. Talbot, S. Abramson, M. Welschbillig

Abstract:

The adsorption process is one of the most efficient methods for removing pollutants from wastewater, provided that suitable adsorbents are used. In order to produce environmentally safe adsorbents, natural polymers have received increasing attention in recent years. Thus, alginate and chitosan are extensively used as inexpensive, non-toxic, and efficient biosorbents. Alginate is an anionic polysaccharide extracted from brown seaweeds. Chitosan is an amino-polysaccharide; this cationic polymer is obtained by deacetylation of chitin, the major constituent of crustacean shells. Furthermore, it has been shown that encapsulating magnetic materials in alginate and chitosan beads facilitates their recovery from wastewater after the adsorption step through an external magnetic field gradient produced by a magnet or an electromagnet. In the present work, we studied the adsorption affinity of magnetic alginate beads and magnetic chitosan beads (called magsorbents) for methyl orange (MO), an anionic dye; methylene blue (MB), a cationic dye; and p-nitrophenol (PNP), a hydrophobic pollutant. The effect of different parameters (solution pH, contact time, initial pollutant concentration, etc.) on the adsorption of each pollutant on the magnetic beads was investigated. The adsorption of anionic and cationic pollutants is mainly due to electrostatic interactions; consequently, methyl orange is strongly adsorbed by chitosan beads in acidic medium and methylene blue by alginate beads in basic medium. In the case of a hydrophobic pollutant, which is weakly adsorbed, we have shown that adsorption is enhanced by adding a surfactant. Cetylpyridinium chloride (CPC), a cationic surfactant, was used to increase the adsorption of PNP by magnetic alginate beads.
Adsorption of CPC on alginate beads occurs through two mechanisms: (i) electrostatic attraction between the cationic head groups of CPC and the negative carboxylate functions of alginate; and (ii) interaction between the hydrocarbon chains of CPC. The hydrophobic pollutant is adsolubilized within the surface aggregated structures of the surfactant. Figure c shows that up to 95% of PNP can be adsorbed in the presence of CPC. At the highest CPC concentrations, desorption occurs due to the formation of micelles in the solution. Our magsorbents appear to efficiently remove ionic and hydrophobic pollutants, and we hope that this fundamental research will be helpful for the future development of magnetically assisted processes in water treatment plants.

Keywords: adsorption, alginate, chitosan, magsorbent, magnetic, organic pollutant

Procedia PDF Downloads 257
447 Understanding the Processwise Entropy Framework in a Heat-powered Cooling Cycle

Authors: P. R. Chauhan, S. K. Tyagi

Abstract:

Adsorption refrigeration technology offers a sustainable and energy-efficient cooling alternative to traditional refrigeration technologies for meeting fast-growing cooling demands. With its ability to use natural refrigerants, low-grade heat sources, and modular configurations, it has the potential to revolutionize the cooling industry. Despite these benefits, the commercial viability of this technology is hampered by several fundamental limiting constraints, including its large size, low uptake capacity, and poor performance resulting from deficient heat and mass transfer characteristics. The deficiencies in heat and mass transfer and the magnitude of exergy loss in the various real processes of an adsorption cooling system can be assessed through entropy generation rate analysis, i.e., the second law of thermodynamics. Therefore, this article presents a second-law investigation in terms of the entropy generation rate (EGR) to identify the energy losses in the various processes of the HPCC-based adsorption system, using MATLAB R2021b software. The adsorption-based cooling system consists of two beds of silica gel arranged in a single stage, while water serves as refrigerant, coolant, and hot fluid. The variation of the process-wise EGR over the cycle time is examined, and a comparative analysis is presented. Moreover, the EGR is also evaluated in the external units, i.e., the heat source and heat sink units used for regeneration and heat rejection, respectively. The research findings reveal that the adsorber-desorber pair, which operates across heat reservoirs with a higher temperature gradient, accounts for more than half of the total EGR. The EGR caused by the heat transfer process is the highest, followed by the heat sink, heat source, and mass transfer, respectively.
In the case of the heat transfer process, valve operation is found to be responsible for more than half (54.9%) of the overall EGR during heat transfer, while the combined contribution of the external units, the source (18.03%) and the sink (21.55%), to the total EGR is 35.59%. The analysis and findings of the present research are expected to pinpoint the sources of energy waste in HPCC-based adsorption cooling systems.
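As a rough illustration of the dominant loss mechanism the abstract identifies, the entropy generation rate for steady heat transfer across a finite temperature difference follows directly from the second law; the heat duty and temperatures below are assumed values, not the paper's operating conditions:

```python
def egr_heat_transfer(q_watts, t_hot_k, t_cold_k):
    """Entropy generation rate S_gen = Q*(1/T_cold - 1/T_hot), in W/K.

    Positive whenever heat Q flows down a temperature gradient
    (t_hot_k > t_cold_k), i.e., the process is irreversible.
    """
    return q_watts * (1.0 / t_cold_k - 1.0 / t_hot_k)

# Assumed example: 500 W driven from a 363 K hot-water source into a
# 303 K adsorber bed wall.
s_gen = egr_heat_transfer(500.0, 363.0, 303.0)
```

Summing such terms over each process of the cycle (and over the external source and sink units) gives the process-wise EGR breakdown reported above.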

Keywords: adsorption cooling cycle, heat transfer, mass transfer, entropy generation, silica gel-water

Procedia PDF Downloads 107
446 Experimental Investigation of Seawater Thermophysical Properties: Understanding Climate Change Impacts on Marine Ecosystems Through Internal Pressure and Cohesion Energy Analysis

Authors: Nishaben Dholakiya, Anirban Roy, Ranjan Dey

Abstract:

The unprecedented rise in global temperatures has triggered complex changes in marine ecosystems, necessitating a deeper understanding of seawater's thermophysical properties, which we address by experimentally measuring ultrasonic velocity and density at varying temperatures and salinities. This study investigates the critical relationship between temperature variations and molecular-level interactions in Arabian Sea surface waters, specifically focusing on internal pressure (π) and cohesion energy density (CED) as key indicators of ecosystem disruption. Our experimental findings reveal that elevated temperatures significantly reduce internal pressure, weakening the intermolecular forces that maintain seawater's structural integrity. This reduction in π correlates directly with decreased habitat stability for marine organisms, particularly affecting pressure-sensitive species and their physiological processes. Similarly, the observed decline in cohesion energy density at higher temperatures indicates a fundamental shift in water molecule organization, impacting the dissolution and distribution of vital nutrients and gases. These molecular-level changes cascade through the ecosystem, affecting everything from planktonic organisms to complex food webs. By employing advanced machine learning techniques, including Stacked Ensemble Machine Learning (SEML) and AdaBoost (AB), we developed highly accurate predictive models (>99% accuracy) for these thermophysical parameters. The results provide crucial insights into the mechanistic relationship between climate warming and marine ecosystem degradation, offering valuable data for environmental policymaking and conservation strategies.
The novelty of this research lies in the fact that no such thermodynamic investigation has previously been reported in the literature; it establishes a quantitative framework for understanding how molecular-level changes in seawater properties directly influence marine ecosystem stability, emphasizing the urgent need for climate change mitigation efforts.

Keywords: thermophysical properties, Arabian Sea, internal pressure, cohesion energy density, machine learning

Procedia PDF Downloads 3
445 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems

Authors: Ramprasad Srinivasan

Abstract:

Engineers create inventions and put their ideas in concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For aerospace vehicles, the weight/strength ratio, strength, stiffness, and stability are the important design drivers. A complex built-up structure is an assemblage of primitive structural forms of arbitrary shape, including 1D structures such as beams and frames, 2D structures such as membranes, plates, and shells, and 3D solid structures. Justification through simulation involves checking all the quantities of interest, namely stresses, deformation, frequencies, and buckling loads, and is normally achieved through the finite element (FE) method. Over the past few decades, fiber-reinforced composites have been fast replacing traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring, etc. Composite panel constructions are used in aircraft to design primary structural components such as wings, empennage, and ailerons, while thin-walled composite beams (TWCB) are used to model slender structures such as stiffened panels and helicopter and wind turbine rotor blades. TWCB exhibit many non-classical effects, including torsional and constrained warping, transverse shear, coupling effects, and heterogeneity, which make the analysis of composite structures far more complex. Conventional FE formulations for 1D structures suffer from many limitations, such as shear locking (particularly in slender beams), lower convergence rates due to material coupling in composites, and the inability to satisfy equilibrium in the domain and the natural boundary conditions (NBC).
For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBC explicitly and many pathological problems, such as shear and membrane locking, spurious modes, stress oscillations, and lower convergence due to mesh distortion. This mandates frequent re-meshing just to achieve an acceptable mesh (one satisfying stringent quality metrics) for analysis, leading to significant cycle time. Besides, separate (u/p) formulations are currently needed to model incompressible materials, and a single unified formulation is missing from the literature. Hence, the coupled field formulation (CFF) is a unified formulation proposed by the author for the solution of complex 1D and 2D structures, addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over conventional methods are presented in this paper.

Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation

Procedia PDF Downloads 66
444 Tapping Traditional Environmental Knowledge: Lessons for Disaster Policy Formulation in India

Authors: Aparna Sengupta

Abstract:

The paper seeks to answer the question of why India's disaster management policies have been unable to deliver the desired results. Do the shortcomings lie in policy formulation, effective policy implementation, or timely prevention mechanisms? Or is there a fundamental issue with policy formulation, which rarely takes the cultural specificities and uniqueness, technological know-how, and educational, religious, and attitudinal capacities of the target population into consideration? India was slow to legislate disaster policies, but beyond that, the limited success of disaster policies seems to stem from the gap between policy and the people. We not only keep hearing about the failure of governmental efforts but also about how local communities deal far more efficaciously with disasters using their traditional knowledge. The 2004 Indian Ocean tsunami, which killed approximately 250,000 people, could not kill the tribal communities who saved themselves through their age-old traditional knowledge. This large-scale disaster, considered a landmark event in the history of disasters in the twenty-first century, confirmed the importance of Traditional Environmental Knowledge in managing disasters. This brings forth the importance of cultural and traditional know-how in dealing with natural disasters, and one is forced to ask why traditional environmental knowledge (TEK) should not be taken into consideration while formulating India's disaster resilience policies. Though at the international level many scholars have explored the connectedness of disaster to cultural dimensions, and several studies have examined how culture shapes the way disasters are perceived and managed (Clifford, 1956; Mcluckie, 1970; Koentjaraningrat, 1985; Peacock, 1997; Elliot et al., 2006; Aruntoi, 2008; Kulatunga, 2010), in the Indian context this field of inquiry, i.e.,
linking disaster policies with tradition and generational understanding, has seldom received attention from the government, decision-making authorities, and disaster managers, or even in academia. The present study attempts to fill this gap in research and scholarship by presenting a historical analysis of disasters and their cognition by cultural communities in India. The paper seeks to interlink the cultural comprehension of Indian tribal communities with scientific technology towards more constructive disaster policies in India.

Keywords: culture, disasters, local communities, traditional knowledge

Procedia PDF Downloads 105