Search results for: Molecular modeling of 17-picolyl and 17-picolinylidene androstane derivatives with anticancer activity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11770


610 The Influence of Gender on Itraconazole Pharmacokinetic Parameters in Healthy Adults

Authors: Milijana N. Miljkovic, Viktorija M. Dragojevic-Simic, Nemanja K. Rancic, Vesna M. Jacevic, Snezana B. Djordjevic, Momir M. Mikov, Aleksandra M. Kovacevic

Abstract:

Itraconazole (ITZ) is a weak base and an extremely lipophilic compound, with water solubility as the rate-limiting step in its absorption from the gastrointestinal tract. Its absolute bioavailability, about 55%, is maximal when its oral formulation (capsules) is taken immediately after a full meal. Peak plasma concentrations (Cmax) are reached within 2 to 5 hours after administration. ITZ undergoes extensive hepatic metabolism by the human CYP3A4 isoenzyme, and more than 30 different metabolites have been identified. One of the main ones is hydroxyitraconazole (HITZ), whose plasma concentrations are almost twice as high as those of ITZ. Gender differences in drug pharmacokinetics (PK) have already been recognized, and variations in metabolism are believed to be their major cause. The aim of the study was to investigate the influence of gender on ITZ PK parameters after administration of the oral capsule formulation, following 100 mg single dosing in healthy adult volunteers under fed conditions. A single-center, open-label PK study was performed. PK analyses included parameters obtained after a single 100 mg dose of itraconazole capsules administered to 48 females and 66 males. Blood samples were collected pre-dose and up to 72.0 h after administration (1.0, 2.0, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 7.0, 9.0, 12.0, 24.0, 36.0 and 72.0 h). The pharmacokinetic parameters calculated from the plasma concentrations of itraconazole and hydroxyitraconazole were Cmax, AUClast, and AUCtot. Plasma concentrations of ITZ and HITZ were determined using a validated liquid chromatographic method with mass spectrometric detection, while pharmacokinetic parameters were estimated using non-compartmental methods in Kinetica software version 5.0. The mean ITZ Cmax was 74.79 ng/mL in men and 51.291 ng/mL in women (independent samples test; p = 0.005).
The mean HITZ Cmax was 106.37 ng/mL in men and 70.05 ng/mL in women. Women had, on average, lower AUClast and Cmax than men. For ITZ, AUClast was 736.02 ng/mL*h in men and 566.62 ng/mL*h in women, while for HITZ it was 1154.80 ng/mL*h in men and 708.12 ng/mL*h in women (independent samples test; p = 0.033). The mean ITZ AUCtot was 884.73 ng/mL*h in men and 685.10 ng/mL*h in women; for HITZ, AUCtot was 1290.41 ng/mL*h in men and 788.60 ng/mL*h in women (p < 0.001). These results could point to lower oral bioavailability of ITZ in women, since the Cmax, AUClast, and AUCtot values of both ITZ and HITZ were significantly lower in women than in men. The reason may be higher expression and activity of CYP3A4 in women than in men, but there may also be differences in other PK parameters. The high variability of both ITZ and HITZ concentrations in both genders confirmed that ITZ is a highly variable drug. Further examination of its PK is needed to justify strategies for therapeutic drug monitoring in patients treated with this antifungal agent.
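The non-compartmental parameters reported above (Cmax, Tmax, AUClast) can be computed directly from a concentration-time profile with the linear trapezoidal rule; a minimal Python sketch with hypothetical values (not the study's data):

```python
def nca_params(times, conc):
    """Non-compartmental Cmax, Tmax and AUClast (linear trapezoidal rule)."""
    cmax = max(conc)
    tmax = times[conc.index(cmax)]
    # AUClast: sum of trapezoid areas between successive sampling times
    auc_last = sum((t2 - t1) * (c1 + c2) / 2
                   for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))
    return cmax, tmax, auc_last

# Hypothetical concentration-time profile (h, ng/mL), for illustration only
times = [0, 1, 2, 3, 4, 6, 9, 12, 24]
conc  = [0, 10, 35, 60, 74, 55, 30, 18, 2]
cmax, tmax, auc = nca_params(times, conc)
```

AUCtot would additionally extrapolate the terminal phase to infinity; the sketch stops at the last measured point, as AUClast does.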

Keywords: itraconazole, gender, hydroxyitraconazole, pharmacokinetics

609 Phytochemistry and Alpha-Amylase Inhibitory Activities of Rauvolfia vomitoria (Afzel) Leaves and Picralima nitida (Stapf) Seeds

Authors: Oseyemi Omowunmi Olubomehin, Olufemi Michael Denton

Abstract:

Diabetes mellitus is a disease related to the digestion of carbohydrates, proteins and fats and how this affects blood glucose levels. Various synthetic drugs employed in the management of the disease work through different mechanisms. Keeping postprandial blood glucose levels within an acceptable range is a major factor in the management of type 2 diabetes and its complications. Thus, the inhibition of carbohydrate-hydrolyzing enzymes such as α-amylase is an important strategy for lowering postprandial blood glucose levels, but synthetic inhibitors have undesirable side effects such as flatulence, diarrhea and gastrointestinal disorders, to mention a few. It is therefore necessary to identify and explore α-amylase inhibitors from plants, given their availability, safety, and low cost. In the present study, extracts from the leaves of Rauvolfia vomitoria and seeds of Picralima nitida, which are used in the Nigerian traditional system of medicine to treat diabetes, were tested for their α-amylase inhibitory effect. The powdered plant samples were subjected to phytochemical screening using standard procedures. The leaves and seeds were macerated successively with n-hexane, ethyl acetate and methanol, and the resulting crude extracts, at different concentrations (0.1, 0.5 and 1 mg/mL) alongside the standard drug acarbose, were subjected to an α-amylase inhibitory assay using the Bernfeld and Miller methods with slight modification. Statistical analysis was done using ANOVA in SPSS version 2.0. Phytochemical screening showed the presence of alkaloids, tannins, saponins and cardiac glycosides in both the leaves of Rauvolfia vomitoria and the seeds of Picralima nitida; in addition, Rauvolfia vomitoria contained phenols and Picralima nitida contained terpenoids.
The α-amylase assay revealed that at 1 mg/mL the methanol, hexane and ethyl acetate extracts of the leaves of Rauvolfia vomitoria gave 15.74%, 23.13% and 26.36% α-amylase inhibition, respectively, while the seed extracts of Picralima nitida gave 15.50%, 30.68% and 36.72% inhibition; none of these was significantly different from the control (p < 0.05), whereas acarbose gave a significant 56% inhibition (p < 0.05). The presence of alkaloids, phenols, tannins, steroids, saponins, cardiac glycosides and terpenoids in these plants is likely responsible for the observed anti-diabetic activity. However, the low percentages of α-amylase inhibition by these plant samples suggest that α-amylase inhibition is not the major way by which either plant exerts its anti-diabetic effect.
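The percentage inhibition values quoted above are conventionally derived from the absorbances of an uninhibited control reaction and an extract-treated reaction; a hedged sketch in Python (the absorbance values are illustrative, not from this assay):

```python
def percent_inhibition(abs_control, abs_test):
    """Percent enzyme inhibition relative to an uninhibited control."""
    return (abs_control - abs_test) / abs_control * 100

# Hypothetical DNS-assay absorbances at 540 nm (made-up numbers)
control = 0.80          # enzyme + substrate, no inhibitor
with_extract = 0.59     # enzyme + substrate + 1 mg/mL extract
inhibition = percent_inhibition(control, with_extract)   # about 26.25 %
```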

Keywords: alpha-amylase, Picralima nitida, postprandial hyperglycemia, Rauvolfia vomitoria

608 Aesthetics and Semiotics in Theatre Performance

Authors: Păcurar Diana Istina

Abstract:

Structured in three chapters, the article attempts an X-ray of theatrical aesthetics, correctly understood through the emotions generated in the intimate structure of the spectator that precede the triggering of the viewer’s perception, and not through the common but unfortunate conflation of the notion of aesthetics with the style in which a theater show is built. The first chapter contains a brief history of the appearance of the word aesthetics, the formulation of definitions for this new term, and its connections with the notions of semiotics, in particular with the perception of the transmitted message. Starting with Aristotle and Plato and reaching Magritte, these interventions should not be interpreted to mean that the two scientific concepts can merge into one discipline. The perception that is the object of everyone’s analysis, the understanding of meaning, the decoding of the messages sent, and the triggering of feelings that culminate in pleasure, shaping the aesthetic vision, are some of the elements that keep semiotics and aesthetics distinct, even though they share many methods of analysis. The compositional processes of aesthetic representation and symbolic formation are analyzed in the second part of the paper from perspectives that may or may not include historical, cultural, social, and political processes. Aesthetics and the organization of its symbolic process are treated with expressive activity taken into account. The last part of the article explores the notion of aesthetics in applied theater, more specifically in the theater show. Taking the postmodern approach that aesthetics applies to both the creation of an artifact and the reception of that artifact, the intervention of these elements in the theatrical system must be emphasized – that is, the analysis of the problems arising in the stages of creation, presentation, and reception, by the public, of the theater performance.
The aesthetic process is triggered involuntarily, either simultaneously with or before the moment when people perceive the meaning of the messages transmitted by the work of art. This finding makes the mental process of aesthetics similar or related to that of semiotics. However individually beauty is perceived, its mechanism of production can be reduced to two steps. The first step presents similarities to Peirce’s model, but the process between signifier and signified additionally stimulates the related memory of the evaluation of beauty, adding to the meanings related to the signification itself. In the second step, a process of comparison follows, in which one examines whether the object being looked at matches the accumulated memory of beauty. Therefore, even though aesthetics is derived from the conceptual part, the judgment of beauty and, more than that, moral judgment come to be so important to the social activities of human beings that they evolve as a visible process independent of other conceptual contents.

Keywords: aesthetics, semiotics, symbolic composition, subjective joints, signifying, signified

607 Perception of Corporate Social Responsibility and Enhancing Compassion at Work through Sense of Meaningfulness

Authors: Nikeshala Weerasekara, Roshan Ajward

Abstract:

In the contemporary business environment, given the stringent scrutiny of corporate behavior, organizations are under pressure to develop and implement solid overarching Corporate Social Responsibility (CSR) strategies. In that milieu, in order to differentiate themselves from competitors and maintain stakeholder confidence, banks spend millions of dollars on CSR programmes. However, knowledge of how non-western bank employees perceive such activities is inconclusive. At the same time, only recently have researchers shifted their focus to the positive effects of compassion at work and the organizational conditions under which it arises. Nevertheless, mediation mechanisms between CSR and compassion at work have not been adequately examined, leaving a vacuum to be explored. Although finding a purpose in work that is greater than the extrinsic outcomes of the work is important to employees, meaningful work has not been examined adequately. Thus, in addition to examining the direct relationship between CSR and compassion at work, this study examined the mediating capability of meaningful work between these variables. Specifically, the researcher explored how CSR enables employees to sense work as meaningful, which in turn would enhance their level of compassion at work. Hypotheses were developed to examine the direct relationship between CSR and compassion at work and the mediating effect of meaningful work on that relationship. Both Social Identity Theory (SIT) and Social Exchange Theory (SET) were used to theoretically support the relationships. The sample comprised 450 respondents covering different levels of the bank. A convenience sampling strategy was used to secure responses from 13 local licensed commercial banks in Sri Lanka. Data were collected using a structured questionnaire which was developed based on a comprehensive review of the literature and refined using both expert opinions and a pilot survey.
Structural equation modeling using Smart Partial Least Squares (PLS) was utilized for data analysis. Findings indicate a positive and significant (p < .05) relationship between CSR and compassion at work. It was also found that meaningful work partially mediates the relationship between CSR and compassion at work. It is therefore concluded that bank employees’ perception of CSR engagement not only directly influences compassion at work but also does so through meaningful work. This implies that employees value working for a socially responsible bank because it creates greater meaningfulness of work, encouraging them to remain with the organization, which in turn triggers a higher level of compassion at work. The use of both SIT and SET to explain the relationships between CSR and compassion at work constitutes the study’s theoretical significance: it enhances the existing literature on CSR and compassion at work and adds insights into the mediating capability of psychological variables such as meaningful work. The study is also expected to have significant policy implications for increasing compassion at work, as managers must understand the importance of including CSR activities in their strategy in order to thrive. Finally, it provides evidence of the suitability of Smart PLS for testing models with mediating relationships involving non-normal data.
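The partial mediation tested above in Smart PLS can be illustrated with a simple product-of-coefficients sketch using ordinary least squares on synthetic data (the actual analysis used PLS-SEM; every number below is made up, and variable names are hypothetical):

```python
import random

random.seed(0)
n = 450
# Synthetic data mimicking the hypothesized paths, not the study's data
x = [random.gauss(0, 1) for _ in range(n)]                   # perceived CSR (X)
m = [0.5 * xi + random.gauss(0, 1) for xi in x]              # meaningful work (M)
y = [0.3 * xi + 0.4 * mi + random.gauss(0, 1)                # compassion at work (Y)
     for xi, mi in zip(x, m)]

def cov(u, v):
    """Sample covariance of two equal-length lists."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

# a-path: X -> M (simple OLS slope)
a = cov(x, m) / cov(x, x)
# b-path and direct effect c': OLS of Y on M and X together (normal equations)
det = cov(m, m) * cov(x, x) - cov(x, m) ** 2
b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / det
c_prime = (cov(x, y) * cov(m, m) - cov(m, y) * cov(x, m)) / det
indirect = a * b   # mediated (indirect) effect of CSR via meaningful work
```

A nonzero indirect effect alongside a nonzero direct effect c' is the partial-mediation pattern the study reports.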

Keywords: compassion at work, corporate social responsibility, employee commitment, meaningful work, positive affect

606 Antimicrobial and Antibiofilm Properties of Fatty Acids Against Streptococcus Mutans

Authors: A. Mulry, C. Kealey, D. B. Brady

Abstract:

Planktonic bacteria can form biofilms, which are microbial aggregates embedded within a matrix of extracellular polymeric substances (EPS). They can be found attached to abiotic or biotic surfaces. Biofilms are responsible for oral diseases such as dental caries and gingivitis and for the progression of periodontal disease. Biofilms can resist 500 to 1000 times the concentration of biocides and antibiotics needed to kill planktonic bacteria. Biofilm development on oral surfaces involves four stages: initial attachment, early development, maturation and dispersal of planktonic cells. The Minimum Inhibitory Concentration (MIC) was determined for a range of saturated and unsaturated fatty acids using the resazurin assay, followed by serial dilution and spot plating on BHI agar plates to establish the Minimum Bactericidal Concentration (MBC). The log reduction of bacteria was also evaluated for each fatty acid. The Minimum Biofilm Inhibition Concentration (MBIC) was determined using the crystal violet assay in 96-well plates on forming and pre-formed S. mutans biofilms in BHI supplemented with 1% sucrose. The saturated medium-chain fatty acids octanoic (C8:0), decanoic (C10:0) and undecanoic acid (C11:0) do not display strong antibiofilm properties; however, lauric (C12:0) and myristic acid (C14:0) display moderate antibiofilm properties, with 97.83% and 97.5% biofilm inhibition at 1000 µM, respectively. The monounsaturated oleic acid (C18:1) and the polyunsaturated long-chain linoleic acid (C18:2) display potent antibiofilm properties, with biofilm inhibition of 99.73% at 125 µM and 100% at 65.5 µM, respectively. The long-chain polyunsaturated omega-3 fatty acids α-linolenic acid (C18:3), eicosapentaenoic acid (EPA, C20:5) and docosahexaenoic acid (DHA, C22:6) have displayed strong antibiofilm efficacy at concentrations ranging from 31.25 to 250 µg/mL. DHA is the most promising antibiofilm agent, with 99.73% biofilm inhibition at 15.625 µg/mL.
This may be due to the presence of six double bonds and the structural orientation of the fatty acid. To conclude, the fatty acids displaying the most antimicrobial activity appear to be medium- or long-chain unsaturated fatty acids containing one or more double bonds. The most promising agents include the omega-3 fatty acids α-linolenic acid, EPA and DHA, together with linoleic acid (omega-6) and the omega-9 fatty acid oleic acid. These results indicate that fatty acids have the potential to be used as antimicrobial and antibiofilm agents against S. mutans. Future work involves further screening of the most potent fatty acids against a range of bacteria, including Gram-positive and Gram-negative oral pathogens, and incorporating the most effective fatty acids into dental implant devices to prevent biofilm formation.
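The MIC readout described above (a resazurin colour change across a two-fold serial dilution) reduces to finding the lowest concentration at which no growth is observed; a minimal sketch with hypothetical dilution data (not the study's measurements):

```python
def mic(concentrations, growth):
    """Return the lowest concentration with no visible growth (the MIC),
    or None if growth occurred at every concentration tested."""
    inhibitory = [c for c, g in zip(concentrations, growth) if not g]
    return min(inhibitory) if inhibitory else None

# Hypothetical two-fold serial dilution of a fatty acid (µg/mL) with
# made-up resazurin results: True = growth (colour change), False = none
concs  = [250, 125, 62.5, 31.25, 15.625, 7.8125]
growth = [False, False, False, False, True, True]
result = mic(concs, growth)   # lowest fully inhibitory concentration
```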

Keywords: antibiofilm, biofilm, fatty acids, S. mutans

605 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam

Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck

Abstract:

Its fast increase and complex composition have made waste electrical and electronic equipment (e-waste) one of the most problematic waste streams worldwide. Precise information on its size at the national, regional and global levels has therefore been highlighted as a prerequisite for a proper management system. However, this is a very challenging task, especially in developing countries, where both a formal e-waste management system and the statistical data necessary for e-waste estimation, i.e. data on the production, sale and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered EEE, which ‘invisibly’ enters the domestic EEE market and is then used for domestic consumption. The non-registered/invisible and (in most cases) illicit nature of this flow makes it difficult or even impossible to capture in any statistical system. The e-waste it generates is thus often uncounted in current e-waste estimations based on statistical market data. This study therefore focuses on enhancing e-waste estimation in developing countries and proposes a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced Input-Output Analysis model (the Sale-Stock-Lifespan model) has been integrated into the calculation procedure. In general, the Sale-Stock-Lifespan model helps improve the quality of the input data for modeling (i.e. it consolidates data to create a more accurate lifespan profile and models a dynamic lifespan to take its changes over time into account), through which the quality of the e-waste estimation can be improved. To demonstrate these objectives, a case study on televisions (TVs) in Vietnam was employed. The results show that the amount of waste TVs in Vietnam has increased fourfold since 2000. This upward trend is expected to continue in the future: in 2035, a total of 9.51 million TVs are predicted to be discarded.
Moreover, the estimation of the non-registered TV inflow shows that it may have contributed on average about 15% of the total TVs sold on the Vietnamese market over the period 2002 to 2013. To address potential uncertainties associated with the estimation models and input data, sensitivity analysis was applied. The results show that both the waste and the non-registered inflow estimates depend on two parameters: the number of TVs used per household and the lifespan. In particular, a 1% increase in the TV in-use rate increases the average market share of the non-registered inflow in the period 2002-2013 by 0.95%, whereas replacing the constant unadjusted lifespan with the dynamic adjusted lifespan decreases that share from 27% to 15%. The effect of these two parameters on the amount of waste TVs generated in each year is more complex and non-linear over time. To conclude, despite the remaining uncertainty, this study is the first attempt to apply the Sale-Stock-Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow to domestic consumption. It can therefore be further improved in the future with more knowledge and data.
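At its core, a sales-and-lifespan e-waste estimate of the kind used above is a convolution of historical sales with a lifespan (discard-probability) profile; a minimal sketch with hypothetical figures (not the study's Vietnamese TV data, and with a simplified discrete profile rather than the dynamic lifespan the authors model):

```python
def waste_generated(sales_by_year, discard_prob, target_year):
    """E-waste arising in `target_year` from past sales and a discrete
    lifespan profile: W(t) = sum over sale years s of sales[s] * p(t - s),
    where p(a) is the probability a unit is discarded at age a."""
    return sum(units * discard_prob.get(target_year - year, 0.0)
               for year, units in sales_by_year.items())

# Hypothetical TV sales (million units) and a made-up lifespan profile
sales = {2000: 1.0, 2005: 2.0, 2010: 4.0}
p_discard = {5: 0.1, 10: 0.3, 15: 0.4, 20: 0.2}   # P(discard at age a)
waste_2015 = waste_generated(sales, p_discard, 2015)   # million units
```

In the actual Sale-Stock-Lifespan model the profile would be a fitted distribution (e.g. Weibull) whose parameters change over time.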

Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam

604 Single-Case Experimental Design: Exploratory Pilot Study on the Feasibility and Effect of Virtual Reality for Pain and Anxiety Management During Care

Authors: Corbel Camille, Le Cerf Flora, Corveleyn Xavier

Abstract:

Introduction: Aging is a physiological phenomenon accompanied by anatomical and cognitive changes that can lead to anxiety and pain. These can have significant impacts on quality of life, life expectancy, and the progression of cognitive disorders. Virtual Reality Intervention (VRI) is increasingly recognized as a non-pharmacological approach to alleviating pain and anxiety in children and young adults. However, while recent studies have explored the feasibility of applying VRI in the older population, further studies are still required to establish its benefits in various contexts. Objective: This pilot study, following international clinical-trial methodology recommendations for VRI in healthcare, aims to evaluate the feasibility and effects of using VRI with a 101-year-old woman residing in a nursing home and undergoing weekly painful and anxiety-provoking wound dressing changes. Methods: Following the international recommendations, this study focused on feasibility and preliminary results. The Single Case Experimental Design protocol consisted of two distinct phases, control (Phase A) and personalized VRI (Phase B), each lasting 6 sessions. Data were collected before, during and after care, using measures of pain (Algoplus and numerical scale), anxiety (Hospital Anxiety Scale and numerical scale), VRI experience (semi-structured interview) and physiological measures. Results: The results suggest that the use of VRI is both feasible and well tolerated by the participant. VRI contributed to a decrease in pain and anxiety during care sessions, with a more pronounced impact on pain than on anxiety, which showed a gradual and slight decrease. Physiological data, particularly those related to stress, also indicate a reduction in physiological activity during VRI. Conclusion: This pilot study confirms the feasibility and benefits of using virtual reality to manage pain and anxiety in an older adult in a nursing home.
In light of these results, future studies should set up randomized controlled trials (RCTs) involving a representative number of older adults to ensure generalizable data. This rigorous, controlled methodology will make it possible to assess the effectiveness of virtual reality more accurately in various care settings, measure its impact on clinical parameters such as pain and anxiety, and explore the long-term implications of this intervention.
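The study reports descriptive phase contrasts; a common single-case effect size such as Nonoverlap of All Pairs (NAP) could quantify the A-versus-B difference. A sketch with made-up pain ratings (not the participant's data):

```python
def nap(phase_a, phase_b):
    """Nonoverlap of All Pairs: the share of (A, B) session pairs in which
    the intervention-phase score improves on (is lower than) the baseline
    score, with ties counted as half. 0.5 = chance, 1.0 = full nonoverlap."""
    pairs = [(a, b) for a in phase_a for b in phase_b]
    better = sum(1 for a, b in pairs if b < a)
    ties = sum(1 for a, b in pairs if b == a)
    return (better + 0.5 * ties) / len(pairs)

# Hypothetical numeric pain ratings (0-10) over 6 sessions per phase
control_phase = [7, 6, 7, 8, 6, 7]   # Phase A: usual care
vri_phase     = [4, 5, 3, 4, 4, 5]   # Phase B: personalized VRI
effect = nap(control_phase, vri_phase)
```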

Keywords: anxiety reduction, nursing home, older adult, pain management, virtual reality

603 A Content Analysis of the Introduction to the Philosophy of Religion Literature Published in the West between 1950-2010 in Terms of Definition, Method and Subjects

Authors: Fatih Topaloğlu

Abstract:

Although philosophy is inherently a theoretical and intellectual activity, it should not be denied that environmental conditions influence the formation and shaping of philosophical thought. In this context, it should be noted that the Philosophy of Religion has been influential in debates in the West, especially since the beginning of the 20th century, and that this influence has dimensions that cannot be limited to academic or intellectual fields. The issues and problems that fall within the field of interest of the Philosophy of Religion are followed with interest by a significant proportion of society through popular publications, and the Philosophy of Religion has had its share in many social, economic, cultural, scientific, political and ethical developments. In the most general sense, the Philosophy of Religion can be defined as a philosophical approach to religion, or a philosophical way of thinking and discussing religion. It tries to explain the epistemological foundations of concepts such as belief and faith that shape religious life by revealing their meaning for the individual, and thus to evaluate, with a comprehensive and critical eye, the effect of beliefs on the individual’s values, judgments and behaviours. Seeking to create new solutions and perspectives by applying the methods of philosophy to religious problems, the Philosophy of Religion tries to solve these problems not by referring to the holy book or religious teachings but by logical proofs obtained through the possibilities of reason and evidence filtered through criticism. Although there is no standard method for doing Philosophy of Religion, an approach that can be expressed as thinking about religion in a rational, objective, and consistent way is generally accepted. The evaluations made within the scope of the Philosophy of Religion have two stages: the first is the definition stage, and the second is the evaluation stage.
In the first stage, the data of different scientific disciplines, especially the other religious sciences, are utilized to define the issues objectively. In the second stage, philosophical evaluations are made on this foundation. In these evaluations, the question of how the relationship between religion and philosophy should be established is extremely sensitive. The main thesis of this paper is that the Philosophy of Religion, as a branch of philosophy, has been affected by the conditions of the historical experience through which it has passed and, under the influence of these conditions, has differentiated over time both its subjects and the methods it uses to carry out its philosophical work. This study will attempt to evaluate the validity of this thesis based on the "Introduction to Philosophy of Religion" literature, which we assume reflects this differentiation. The examination aims to reach some factual conclusions about the nature of both philosophical and religious thought, to determine the phases through which the Philosophy of Religion as a discipline has passed since it emerged, and to investigate the possibilities of a holistic view of the field.

Keywords: content analysis, culture, history, philosophy of religion, method

602 Spectroscopic (IR, Raman, UV-Vis) and Biological Study of Copper and Zinc Complexes and Sodium Salt with Cichoric Acid

Authors: Renata Swislocka, Grzegorz Swiderski, Agata Jablonska-Trypuc, Wlodzimierz Lewandowski

Abstract:

Forming a complex between a phenolic compound and a metal not only alters the physicochemical properties of the ligand (including increasing its stability or changing its lipophilicity) but also its biological activity, including antioxidant, antimicrobial and many other activities. As part of our previous projects, we examined the physicochemical and antimicrobial properties of phenolic acids and their complexes with metals naturally occurring in foods; previously we studied the complexes of manganese(II), copper(II), cadmium(II) and alkali metals with ferulic, caffeic and p-coumaric acids. In the framework of this study, the physicochemical and biological properties of cichoric acid, its sodium salt, and its complexes with copper and zinc were investigated. Cichoric acid is a derivative of both caffeic acid and tartaric acid. It was first isolated from Cichorium intybus (chicory) but also occurs in significant amounts in Echinacea, particularly E. purpurea, in dandelion leaves, basil and lemon balm, and in aquatic plants, including algae and sea grasses. A variety of methods were used to study the spectroscopic and biological properties of cichoric acid, its sodium salt, and its complexes with zinc and copper. Antioxidant properties were studied against selected stable radicals (DPPH reduction and FRAP methods). As a result, the structure and spectroscopic properties of cichoric acid and its complexes with the selected metals were defined in the solid state and in solution. The IR and Raman spectra of cichoric acid displayed a number of bands derived from vibrations of the caffeic and tartaric acid moieties. At 1746 and 1716 cm-1, bands assigned to vibrations of the carbonyl groups of the tartaric acid moiety occurred. In the spectra of the metal complexes with cichoric acid, these bands disappeared, which indicated that the metal ion is coordinated by the carboxylic groups of the tartaric acid moiety.
In the spectra of the sodium salt, characteristic broad bands of carboxylate anion vibrations occurred. In the spectra of cichoric acid and its salt and complexes, a number of bands derived from vibrations of the aromatic ring (of the caffeic acid moiety) were assigned. Upon metal-ligand attachment, the wavenumbers of these bands shifted. The impact of the metals on the antioxidant properties of cichoric acid was also examined. Cichoric acid has a high antioxidant potential, and complexation by the metals (zinc, copper) did not significantly affect its antioxidant capacity. The work was supported by the National Science Centre, Poland (grant no. 2015/17/B/NZ9/03581).

Keywords: chicoric acid, metal complexes, natural antioxidant, phenolic acids

601 Protective Effect of Ginger Root Extract on Dioxin-Induced Testicular Damage in Rats

Authors: Hamid Abdulroof Saleh

Abstract:

Background: Dioxins are among the most widely distributed environmental pollutants. They are formed as by-products in some industries, such as the paper industry, and can be produced in the atmosphere during the burning of garbage and waste, especially medical waste. Dioxins can be found in the adipose tissues of animals in the food chain as well as in human breast milk. 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) is the most toxic component of a large group of dioxins. Humans are exposed to TCDD through contaminated food items such as meat, fish, milk products and eggs. Recently, natural formulations aimed at reducing or eliminating TCDD toxicity have been in focus. Ginger rhizome (Zingiber officinale R., family Zingiberaceae) is used worldwide as a spice. Both antioxidative and androgenic activity of Z. officinale have been reported in animal models. Researchers have shown that ginger oil has a strong protective effect against DNA damage and might act as a scavenger of oxygen radicals and be used as an antioxidant. Aim of the work: The present study was undertaken to evaluate the toxic effect of TCDD on the structure and histoarchitecture of the testis and the protective role of co-administration of ginger root extract against this toxicity. Materials & Methods: Male adult rats of the Sprague-Dawley strain were assigned to four groups of eight rats each: a control group; a dioxin-treated group (given TCDD at a dose of 100 ng/kg body weight/day by gavage); a ginger-treated group (given 50 mg/kg body weight/day of ginger root extract by gavage); and a dioxin- and ginger-treated group (given TCDD at 100 ng/kg body weight/day together with 50 mg/kg body weight/day of ginger root extract by gavage). After three weeks, the rats were weighed and sacrificed, and the testes were removed and weighed. The testes were processed for routine paraffin embedding and staining, and tissue sections were examined for morphometric and histopathological changes.
Results: Dioxin administration had harmful effects on body weight, testis weight and other morphometric parameters of the testis. In addition, it produced varying degrees of damage to the seminiferous tubules, which were shrunken and devoid of mature spermatids. The basement membrane was disorganized, with vacuolization and loss of germinal cells. The co-administration of ginger root extract produced obvious improvement in these changes, reversing the morphometric and histopathological alterations of the seminiferous tubules. Conclusion: Ginger root extract treatment in this study was successful in reversing all morphometric and histological changes of dioxin-induced testicular damage; it therefore has a protective effect on the testis against dioxin toxicity.

Keywords: dioxin, ginger, rat, testis

Procedia PDF Downloads 403
600 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides good accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed remains unmapped, as there are still many gaps between ship survey tracks; moreover, such measurements are very expensive and time-consuming. An alternative is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans, whose products are compilations of different data sets, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models: some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, Sea Surface Height and marine gravity anomalies can be estimated, and from the anomalies it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, model densification, and the creation of grid models. The input data are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis.
Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, together with knowledge of the topography of the ocean floor, indirectly inform us about the volume of the entire ocean. The true shape of the ocean surface is further varied by phenomena such as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. Depending on the location of the point, the greater the depth, the smaller the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
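As a minimal, hypothetical sketch of the gridding step described above (the study evaluates several interpolation algorithms; here SciPy's `griddata` stands in, applied to synthetic scattered soundings rather than real survey data):

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scattered soundings: (x, y) positions and depths in metres
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(200, 2))
depth = -3000 + 500 * np.sin(4 * pts[:, 0]) * np.cos(4 * pts[:, 1])

# Regular grid onto which the scattered survey data are interpolated
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = griddata(pts, depth, (gx, gy), method="linear")

print(grid.shape)  # a 50 x 50 raster bathymetric model
```

Swapping `method` between "nearest", "linear" and "cubic" is one simple way to compare interpolation algorithms on the same point set.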

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 60
599 Cai Guo-Qiang: A Chinese Artist at the Cutting-Edge of Global Art

Authors: Marta Blavia

Abstract:

Magiciens de la terre, organized in 1989 by the Centre Pompidou, became 'the first worldwide exhibition of contemporary art' by presenting artists from Western and non-Western countries, including three Chinese artists. For the first time, the West turned its eyes to other countries not as exotic sources of inspiration, but as places where contemporary art was also being created. One year later, Chine: demain pour hier was inaugurated as the first Chinese avant-garde group exhibition in the West. Among the artists included was Cai Guo-Qiang who, like many other Chinese artists, had left his home country in the eighties in pursuit of greater creative freedom. By exploring non-Western artistic perspectives, both landmark exhibitions questioned the predominance of the Eurocentric vision in the construction of art history. More than anything else, they laid the groundwork for the rise of the phenomenon known as 'global contemporary art'. At the same time, 1989 was also a turning point in Chinese art history. Because of the Tiananmen student protests, the Chinese government undertook a series of measures to cut down any kind of avant-garde artistic activity after a decade of relative openness. During the eighties, and especially after the Tiananmen crackdown, some important artists began to leave China and move overseas, such as Xu Bing and Ai Weiwei (USA); Chen Zhen and Huang Yong Ping (France); and Cai Guo-Qiang (Japan). After emigrating, Chinese overseas artists began to develop projects in accordance with their new environments and audiences and to appear in numerous international exhibitions. With creations that moved freely between a variety of Eastern and Western art sources, these artists were crucial agents in the emergence of global contemporary art.
Like other Chinese artists overseas, Cai Guo-Qiang saw his career take off during the 1990s and early 2000s, right at the moment when the Western art world started to look beyond itself. Little by little, he developed a very personal artistic language that redefines Chinese ideas, symbols, and traditional materials in a new world order marked by globalization. Cai Guo-Qiang participated in many of the exhibitions that helped shape global contemporary art: Encountering the Others (1992); the 45th Venice Biennale (1993); Inside Out: New Chinese Art (1997); and the 48th Venice Biennale (1999), where he recreated the monumental Chinese social realist work Rent Collection Courtyard, which earned him the Golden Lion Award. By examining the different stages of Cai Guo-Qiang's artistic path as well as the transnational dimensions of his creations, this paper aims to offer a comprehensive survey of the construction of the discourse of global contemporary art.

Keywords: Cai Guo-Qiang, Chinese artists overseas, emergence global art, transnational art

Procedia PDF Downloads 268
598 Valorization of Lignocellulosic Wastes – Evaluation of Its Toxicity When Used in Adsorption Systems

Authors: Isabel Brás, Artur Figueirinha, Bruno Esteves, Luísa P. Cruz-Lopes

Abstract:

Agricultural lignocellulosic by-products are receiving increased attention, namely in the search for filter materials that retain contaminants from water. These by-products, specifically almond and hazelnut shells, are abundant in Portugal, since almond and hazelnut production is an important local activity. Hazelnut and almond shells have lignin, cellulose and hemicelluloses, water-soluble extractives and tannins as their main constituents. During the adsorption of heavy metals from contaminated waters, water-soluble compounds can leach from the shells and have a negative impact on the environment. The chemical characterization of treated water by itself may not reveal the environmental impact of discharges when the parameters obey legal quality standards for water; only biological systems can detect the toxic effects of the water constituents. Therefore, the evaluation of toxicity by biological tests is very important when deciding whether water is suitable for safe discharge or for irrigation. The main purpose of the present work was to assess, with short-term acute toxicity tests, the potential impacts of waters treated for heavy metal removal in hazelnut and almond shell adsorption systems. To conduct the study, water at pH 6 containing 25 mg.L-1 of lead was treated with 10 g of shell per litre of wastewater for 24 hours; this procedure was followed for each shell type. Afterwards, the water was collected for toxicological assays, namely bacterial resistance, seed germination, the Lemna minor L. test and plant growth. The effect on isolated bacterial strains was determined by the disc diffusion method, and the germination index was evaluated using lettuce seeds, with temperature and humidity controlled for 7 days. For a higher aquatic organism, Lemna plants were used, with 4 days of contact time with the shell solutions under controlled light and temperature.
For higher terrestrial plants, biomass production was evaluated 14 days after tomato germination in soil, with controlled humidity, light and temperature. Toxicity tests of the water treated with shells revealed some effects on the tested organisms, although the assays showed behaviour close to that of the control, leading to the conclusion that further use of the shells should not be considered a serious risk to the environment.

Keywords: lignocellulosic wastes, adsorption, acute toxicity tests, risk assessment

Procedia PDF Downloads 354
597 Traditional Rainwater Harvesting Systems: A Sustainable Solution for Non-Urban Populations in the Mediterranean

Authors: S. Fares, K. Mellakh, A. Hmouri

Abstract:

The StorMer project aims to set up a network of researchers to study traditional hydraulic rainwater harvesting systems in the Mediterranean basin, a region suffering major impacts of climate change and limited natural water resources. The arid and semi-arid Mediterranean basin has a long history of pioneering water management practices; the region developed various ancient traditional water management systems, such as cisterns and qanats, to manage water resources sustainably under historical conditions of scarcity. The StorMer project therefore brings together Spain, France, Italy, Greece, Jordan and Morocco to explore traditional rainwater harvesting practices and systems in the Mediterranean region and to develop accurate models simulating the performance and sustainability of these technologies under present-day climatic conditions. The ultimate goal of the project is to revive and valorize these practices in the context of contemporary challenges. The project was intended to establish a Mediterranean network to serve as the basis for a more ambitious undertaking: to analyze traditional hydraulic systems and create a prototype hydraulic ecosystem using a coupled environmental approach and traditional and ancient know-how, reinterpreting them in the light of current techniques. The combination of traditional and modern knowledge and techniques is expected to lead to proposals for innovative hydraulic systems. The pandemic initially slowed our progress, but we were eventually able to carry out the fieldwork in Morocco and Saudi Arabia and so restart the project. With the participation of colleagues from distant disciplines (archaeology, sociology), we are now prepared to share our observations and propose the next steps. This interdisciplinary approach should give us a global vision of the project's objectives and challenges.
A diachronic approach is needed to tackle the question of the long-term adaptation of societies in a Mediterranean context that has experienced several periods of water stress. The next stage of the StorMer project is the implementation of pilots in non-urbanized regions. These pilots will test the implementation of traditional systems and will be maintained and evaluated in terms of effectiveness, cost and acceptance. Based on these experiences, larger projects will be proposed that could inform regional water management policies. One of the most important lessons learned from this project is the highly social nature of managing traditional rainwater harvesting systems. Unlike modern, centralized water infrastructures, these systems often require the involvement of communities, which assume ownership of and responsibility for them. Such community engagement leads to better maintenance and, therefore, greater sustainability of the systems. Knowledge of the socio-cultural characteristics of these communities means that the systems can be adapted to the needs of each location, ensuring greater acceptance and efficiency.

Keywords: oasis, rainfall harvesting, arid regions, Mediterranean

Procedia PDF Downloads 16
596 Assessing Moisture Adequacy over Semi-arid and Arid Indian Agricultural Farms using High-Resolution Thermography

Authors: Devansh Desai, Rahul Nigam

Abstract:

Crop water stress (W) at a given growth stage starts to set in as moisture availability (M) to the roots falls below 75% of its maximum. The ratio of crop evapotranspiration (ET) to reference evapotranspiration (ET0) has been found to be an indicator of moisture adequacy and is strongly correlated with M and W. The spatial variability of ET0 over an agricultural farm of 1-5 ha is generally lower than that of ET, since ET depends on both surface and atmospheric conditions while ET0 depends only on atmospheric conditions. Solutions from surface energy balance (SEB) modeling and thermal infrared (TIR) remote sensing are now known to estimate the latent heat flux of ET. In the present study, ET and the moisture adequacy index (MAI = ET/ET0) have been estimated over two contrasting agricultural sites in western India: a rice-wheat system in a semi-arid climate and an arid grassland system limited by moisture availability. High-resolution multi-band TIR observations at 65 m from the ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) instrument on board the International Space Station (ISS) were used in an analytical SEB model, STIC (Surface Temperature Initiated Closure), to estimate ET and MAI. The ancillary variables used in the ET modeling and MAI estimation were land surface albedo and NDVI from near-coincident LANDSAT data at 30 m spatial resolution, the ET0 product at 4 km spatial resolution from INSAT 3D, and meteorological forcing variables (air temperature and relative humidity) from short-range NWP weather forecasts. Farm-scale ET estimates at 65 m spatial resolution showed a low RMSE of 16.6% to 17.5% with R2 > 0.8 across 18 datasets, compared to reported errors of 25-30% for coarser-scale ET at 1 to 8 km spatial resolution when evaluated against in situ measurements from eddy covariance systems. The MAI showed lower (<0.25) and higher (>0.5) magnitudes in the two contrasting agricultural farms.
The study showed the potential need for high-resolution, high-repeat spaceborne multi-band TIR payloads, along with optical payloads, for estimating farm-scale ET and MAI and hence consumptive water use and water stress. A set of future high-resolution multi-band TIR sensors is planned on board the Indo-French TRISHNA, ESA's LSTM, and NASA's SBG spaceborne missions to support sustainable irrigation water management at farm scale and improve crop water productivity. These will provide precise and fundamental surface energy balance variables such as LST (Land Surface Temperature), surface emissivity, albedo and NDVI. Synchronization among these missions is needed in terms of observations, algorithms, product definitions, calibration-validation experiments and downstream applications to maximize their potential benefits.
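The moisture adequacy index defined above is a simple per-pixel ratio; a minimal sketch with hypothetical ET and ET0 values, where the thresholds mirror the magnitudes reported for the two farms:

```python
import numpy as np

# Hypothetical per-pixel values on a small 65 m grid (mm/day)
et = np.array([[1.0, 2.5], [4.0, 5.5]])   # actual evapotranspiration
et0 = np.full_like(et, 6.0)               # reference evapotranspiration

mai = et / et0  # moisture adequacy index, MAI = ET / ET0

# MAI < 0.25 flags moisture-limited pixels (the arid grassland case);
# MAI > 0.5 indicates adequate moisture (the semi-arid rice-wheat case)
stressed = mai < 0.25
adequate = mai > 0.5
print(mai.round(2))
```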

Keywords: thermal remote sensing, land surface temperature, crop water stress, evapotranspiration

Procedia PDF Downloads 53
595 A Research Study of the Inclusiveness of VR Headsets for Higher Education

Authors: Fredrick Forster, Gareth Ward, Matthew Tubby, Pamela Lithgow, Anne Nortcliffe

Abstract:

This paper presents the results of a research study in which random adult participants used one of four different commercially available Virtual Reality (VR) Head Mounted Displays (HMDs) and completed a post-experience reflection questionnaire. The research sought to understand how inclusive commercially available VR HMDs are and to identify any associated barriers that could impact widespread adoption of the devices, specifically in Higher Education (HE). In the UK, education providers are legally required under the Equality Act 2010 to ensure all education facilities are inclusive and that reasonable adjustments can be applied appropriately. The research specifically aimed to identify the considerations that academics and learning technologists need to make when adopting commercial VR HMDs in HE classrooms, namely cybersickness, user comfort, Interpupillary Distance, inclusiveness, and user perceptions of VR. The research approach was designed to build upon previously published research on user reflections on presence, usability, and overall HMD comfort, using quantitative and qualitative research methods by way of a questionnaire. The quantitative data included the recording of physical characteristics such as the distance between the eye pupils, known as Interpupillary Distance (IPD). VR HMDs require each user's IPD measurement to focus the HMD's virtual camera output at the right position in front of the user's eyes. In addition, the questionnaire captured users' qualitative reflections and evaluations of the broader accessibility characteristics of the VR HMDs. The initial research activity was accomplished by asking a random sample of visitors, staff, and students at Canterbury Christ Church University, Kent, to use a VR HMD for a set period of time and to complete the post-experience questionnaire.
The study identified that there is little correlation between users who experience cybersickness and those who experience car sickness. Users with a smaller-than-average IPD (typically associated with females) were able to use the VR HMDs successfully; however, users with a larger-than-average IPD reported an impeded experience. This indicates reduced inclusiveness of the tested VR HMDs for users with a higher-than-average IPD, which is typically associated with males of certain ethnicities. As action education research, these initial findings will be used to refine the research method and conduct further investigations, with the aim of verifying and validating the accessibility of current commercial VR HMDs. The conference presentation will report the research results of the initial study and of subsequent follow-up studies with a larger variety of adult volunteers.

Keywords: virtual reality, education technology, inclusive technology, higher education

Procedia PDF Downloads 50
594 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography

Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai

Abstract:

Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the commonest site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation of osseous metastatic disease and to provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model that can segment vertebral bone metastatic lesions of prostate adenocarcinoma on CT images. nnUNet is an open-source Python package that adds optimizations to the deep learning-based UNet architecture but has not been extensively combined with transfer learning techniques, as no readily available functionality for this exists. The IRB-approved study data set includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and the Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years in radiology and oncologic imaging), to serve as ground truth for the automated segmentation. Despite nnUNet's success on some medical segmentation tasks, it produced an average Dice Similarity Coefficient (DSC) of only 0.31 on the USC dataset. DSC results fell in a bimodal distribution, with most scores falling either above 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1.
Datasets have been identified for transfer learning, balancing dataset size against similarity to the target data. Identified datasets include the Pancreas data from the Medical Segmentation Decathlon, the Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). The challenges of producing an accurate model from the USC dataset include the small dataset size (115 images), 2D data (nnUNet generally performs better on 3D data), and the limited amount of public data with annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including generative adversarial networks and diffusion models, to augment the dataset. Performance with different libraries, including MONAI and custom architectures in PyTorch, will be compared. In the future, molecular correlations will be tracked against radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of detecting vertebral metastatic lesions.
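The Dice Similarity Coefficient reported above can be computed from binary masks as follows (a toy sketch on tiny arrays, not the nnUNet evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty: perfect agreement

# Toy 4x4 masks: the prediction overlaps 2 of the 3 ground-truth lesion voxels
truth = np.zeros((4, 4), dtype=int); truth[1, 1:4] = 1
pred = np.zeros((4, 4), dtype=int);  pred[1, 1:3] = 1
print(round(dice(pred, truth), 3))  # 2*2 / (2+3) = 0.8
```

A DSC of 0 thus corresponds to a prediction with no overlap at all, which is why missed lesions pull the bimodal average down so sharply.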

Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics

Procedia PDF Downloads 76
593 Annexing the Strength of Information and Communication Technology (ICT) for Real-time TB Reporting Using TB Situation Room (TSR) in Nigeria: Kano State Experience

Authors: Ibrahim Umar, Ashiru Rajab, Sumayya Chindo, Emmanuel Olashore

Abstract:

INTRODUCTION: Kano is the most populous state in Nigeria and one of the two states with the highest TB burden in the country. The state notifies an average of more than 8,000 TB cases quarterly and had the highest yearly notification of all the states in Nigeria from 2020 to 2022. The contribution of the state TB program to the national TB notification varied between 9% and 10% quarterly from the first quarter of 2022 to the second quarter of 2023. The Kano State TB Situation Room is an innovative platform for timely data collection, collation and analysis for informed decision-making in the health system. During the 2023 second National TB Testing Week (NTBTW), the Kano TB program aimed at early TB detection, prevention and treatment, and the state TB Situation Room provided an avenue for coordination and surveillance through real-time data reporting, review, analysis and use. OBJECTIVES: To assess the role of an innovative information and communication technology platform for real-time TB reporting during the second National TB Testing Week in Nigeria in 2023, and to showcase the NTBTW data cascade analysis using the TSR as an innovative ICT platform. METHODOLOGY: The state TB program deployed a real-time virtual dashboard for NTBTW reporting, analysis and feedback. A data room team was set up to receive real-time data through a Google link. The data received were analyzed using the Power BI analytic tool, with a statistical alpha level of significance of <0.05. RESULTS: At the end of the week-long activity, using the real-time dashboard with on-site mentorship of the field workers, the state TB program screened a total of 52,054 people for TB out of 72,112 eligible individuals (72% screening rate). A total of 9,910 clients with presumptive TB were identified and evaluated, leading to the diagnosis of 445 patients with TB (5% yield from presumptives) and the placement of 435 of them on treatment (98% enrolment).
CONCLUSION: The TB Situation Room has been a great asset to the Kano State TB Control Program in meeting the growing demand for timely data reporting in TB and other global health responses. The use of real-time surveillance data during the 2023 NTBTW has in no small measure improved the TB response and feedback in Kano State. Scaling up this intervention to other disease areas, states and nations is a positive step towards global TB eradication.
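The cascade indicators quoted in the results can be reproduced directly from the reported counts; a small sketch (the rounded percentages in the abstract come from these ratios):

```python
# NTBTW cascade counts as reported above
eligible = 72_112
screened = 52_054
presumptive = 9_910
diagnosed = 445
enrolled = 435

screening_rate = screened / eligible              # screened / eligible
yield_from_presumptive = diagnosed / presumptive  # diagnosed / presumptive
enrolment_rate = enrolled / diagnosed             # enrolled / diagnosed

print(f"{screening_rate:.1%} {yield_from_presumptive:.1%} {enrolment_rate:.1%}")
# prints 72.2% 4.5% 97.8%
```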

Keywords: tuberculosis (tb), national tb testing week (ntbtw), tb situation room (tsr), information communication technology (ict)

Procedia PDF Downloads 48
592 Multi-Criteria Evolutionary Algorithm to Develop Efficient Schedules for Complex Maintenance Problems

Authors: Sven Tackenberg, Sönke Duckwitz, Andreas Petz, Christopher M. Schlick

Abstract:

This paper introduces an extension of the well-established Resource-Constrained Project Scheduling Problem (RCPSP) to complex maintenance problems. The problem is to assign technicians to a team that has to process several tasks with multi-level skill requirements during a work shift. Several alternative activities for a task allow both the temporal shifting of activities and the reallocation of technicians and tools. As a result, switches from one valid work process variant to another can be considered and may be selected by the developed evolutionary algorithm based on the present skill level of the technicians or the available tools. An additional complication of the observed scheduling problem is that the locations of the construction sites are only temporarily accessible during the day. Due to intensive rail traffic, the available time slots for maintenance and repair works are extremely short and are often distributed throughout the day. To identify efficient working periods, a first concept of a Bayesian network is introduced and integrated into the extended RCPSP with pre-emptive and non-pre-emptive tasks. The Bayesian network is used to calculate the probability of a maintenance task being processed during a specific period of the shift. Focusing on the maintenance of railway infrastructure in metropolitan areas, one of the least productive construction-site processes, the paper illustrates how the extended RCPSP can be applied to support maintenance planning. A multi-criteria evolutionary algorithm with a suitable problem representation is introduced which is capable of revising technician-task allocations, where task durations may be stochastic. The approach uses a novel activity-list representation to ensure easily describable and modifiable elements that can be converted into detailed shift schedules.
The main objective is to develop a shift plan that maximizes the utilization of each technician by minimizing the waiting times caused by rail traffic. The results of the already implemented core algorithm show fast convergence towards an optimal team composition for a shift, an efficient sequence of tasks, and a high probability of subsequent implementation despite the stochastic durations of the tasks. In the paper, the algorithm for the extended RCPSP is analyzed in an experimental evaluation using real-world example problems of various sizes, resource complexities and degrees of tightness.
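A highly simplified sketch of the activity-list idea (hypothetical tasks and durations; the actual algorithm is multi-criteria and handles skills, tools, time windows and stochastic durations): a chromosome is a task ordering, the decoder greedily assigns each task to the earliest-free technician, and a (1+1) evolutionary loop mutates the list by swapping two positions.

```python
import random

# Hypothetical task durations (hours) for one shift
durations = {"A": 3, "B": 2, "C": 4, "D": 1, "E": 2}

def makespan(order, n_technicians=2):
    """Decode an activity list: each task goes to the earliest-free technician."""
    free = [0.0] * n_technicians
    for task in order:
        i = free.index(min(free))   # earliest-available technician
        free[i] += durations[task]  # waiting windows are not modelled here
    return max(free)

def evolve(generations=200, seed=1):
    """(1+1) evolutionary loop: swap two positions, keep the candidate if no worse."""
    rng = random.Random(seed)
    best = list(durations)          # initial activity list
    for _ in range(generations):
        cand = best[:]
        i, j = rng.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        if makespan(cand) <= makespan(best):
            best = cand
    return best, makespan(best)

order, span = evolve()
print(span)  # total work is 12 hours over 2 technicians, so 6.0 is optimal
```

The same representation extends naturally to a population-based, multi-criteria algorithm by replacing the single candidate with a population and the makespan with a vector of objectives.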

Keywords: maintenance management, scheduling, resource constrained project scheduling problem, genetic algorithms

Procedia PDF Downloads 215
591 Determination of the Relative Humidity Profiles in an Internal Micro-Climate Conditioned Using Evaporative Cooling

Authors: M. Bonello, D. Micallef, S. P. Borg

Abstract:

Driven by increased comfort standards but at the same time a high energy consciousness, energy-efficient space cooling has become an essential aspect of building design. Its aim is simple: to provide satisfactory thermal comfort for individuals in an interior space using low-energy cooling systems. In this context, evaporative cooling is both an energy-efficient and an eco-friendly cooling process. In the past two decades, several academic studies have examined the thermal comfort produced by evaporative cooling systems, including studies on temperature profiles, air speed profiles, and the effects of clothing and personal activity. To the best knowledge of the authors, no study has yet considered the analysis of relative humidity (RH) profiles in a space cooled using evaporative cooling. Such a study can determine the effect of different humidity levels on a person's thermal comfort and aid in the improved design of future systems. Under this premise, the research objective is to characterise the resulting RH profiles in a chamber micro-climate cooled by an evaporative cooling system in which the inlet air speed, temperature and humidity content are varied. The chamber is modelled using Computational Fluid Dynamics (CFD) in ANSYS Fluent. Relative humidity is modelled using a species transport model, while the k-ε RNG formulation is the proposed turbulence model. The model is validated with measurements taken in an identical test chamber under the different inlet conditions mentioned above, followed by verification of the model's mesh and time step. The verified and validated model is then used to simulate other inlet conditions which would be impractical to test in the actual chamber.
More details of the modelling and experimental approach will be provided in the full paper. The main conclusions from this work are two-fold: the micro-climatic spatial distribution of relative humidity within the room is important to consider when investigating comfort at occupant level; and a human being's thermal comfort (based on Predicted Mean Vote – Predicted Percentage Dissatisfied [PMV-PPD] values) varies with the local relative humidity. The study provides the necessary groundwork for investigating the micro-climatic RH conditions of environments cooled using evaporative cooling. Future work may also target ways in which evaporative cooling systems can be improved to enhance thermal comfort, specifically with respect to the humidity content around a sedentary person.
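As an illustrative sketch (not Fluent's internal formulation) of how a species-transport solution maps to relative humidity: the solver yields the water vapour mass fraction, from which the vapour partial pressure follows via the mole fraction, and RH is that pressure over the saturation pressure at the local temperature (Tetens approximation assumed here):

```python
import math

M_W, M_AIR = 18.015, 28.966  # molar masses of water vapour and dry air, g/mol

def relative_humidity(w, t_celsius, p=101_325.0):
    """RH from water vapour mass fraction w, temperature and total pressure."""
    x = (w / M_W) / (w / M_W + (1.0 - w) / M_AIR)  # mole fraction of vapour
    p_vapour = x * p                               # partial pressure, Pa
    # Tetens approximation for saturation vapour pressure, Pa
    p_sat = 610.78 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))
    return p_vapour / p_sat

print(f"{relative_humidity(0.010, 25.0):.2f}")  # ~0.51 at 25 degrees C
```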

Keywords: chamber micro-climate, evaporative cooling, relative humidity, thermal comfort

Procedia PDF Downloads 139
590 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order to perform, for example, question-and-answer tasks that parse real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains; wide-context cues remain elusive in parsing words and sentences; and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable 'memory' elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons. First, it addresses one of the difficulties that standard machine learning techniques face by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion.
In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory, as well as considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science, researchers are now rejecting storage and retrieval, even in principle, and instead developing models of human memory in which plasticity and dynamics are the rule rather than the exception. In these models, storage is avoided entirely by modeling memory with a recurrent neural network designed to fit a preconceived energy function that attains zero values only for the desired memory patterns, so that these patterns are the sole stable equilibrium points of the attractor network. So although the arrays of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually incur difficulties theoretically analogous to those demonstrated by older, storage-based models of human memory. The kind of emergent stability found in attractor network models fits our best understanding of human long-term memory more closely than the memory network arrays do, despite appearances to the contrary.
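The energy-function idea can be made concrete with a minimal sketch. The toy Hopfield-style network below is an illustrative stand-in for the attractor models described above, not any specific model from the literature: nothing is stored in the retrieval sense, a pattern is simply a stable equilibrium of the dynamics, and a corrupted input relaxes back to it.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix whose energy minima sit at the given +/-1 patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)                  # no self-connections
    return w / patterns.shape[0]

def energy(w, state):
    """The network's energy function; stored patterns are its minima."""
    return -0.5 * state @ w @ state

def recall(w, state, sweeps=20):
    """Asynchronous updates descend the energy surface to an attractor."""
    s = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
w = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]                        # corrupt one element
restored = recall(w, noisy)
print(np.array_equal(restored, pattern))    # prints True
```

Remembering here is relaxation to a stable equilibrium, not a lookup in an array, which is the contrast the abstract draws with memory-network storage.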

Keywords: artificial reasoning, human memory, machine learning, neural networks

589 Seismic Behavior of Existing Reinforced Concrete Buildings in California under Mainshock-Aftershock Scenarios

Authors: Ahmed Mantawy, James C. Anderson

Abstract:

Numerous cases of earthquakes (main-shocks) followed by aftershocks have been recorded in California. In 1992, a pair of strong earthquakes occurred within three hours of each other in Southern California. The first shock occurred near the community of Landers and was assigned a magnitude of 7.3; the second occurred near the city of Big Bear, about 20 miles west of the initial shock, and was assigned a magnitude of 6.2. In the same year, a series of three earthquakes occurred over two days in the Cape Mendocino area of Northern California. The main-shock was assigned a magnitude of 7.0, while the second and third shocks were both assigned a value of 6.6. This paper investigates the effect of a main-shock accompanied by aftershocks of significant intensity on reinforced concrete (RC) frame buildings, using PERFORM-3D software to capture nonlinear behavior. A 6-story building in San Bruno and a 20-story building in North Hollywood were selected for the study, as both have RC moment-resisting frame systems. The buildings are also instrumented at multiple floor levels as part of the California Strong Motion Instrumentation Program (CSMIP). Both buildings have recorded responses from past events, such as the Loma Prieta and Northridge earthquakes, which were used to verify the response parameters of the numerical models in PERFORM-3D. The verification shows good agreement between the calculated and recorded response values. Different scenarios of a main-shock followed by a series of aftershocks, drawn from real cases in California, were then applied to the building models in order to investigate the structural behavior of the moment-resisting frame system. The behavior was evaluated in terms of lateral floor displacements, ductility demands, and inelastic behavior at critical locations.
The analysis results showed that permanent displacements may occur due to plastic deformation during the main-shock, which can lead to larger displacements during aftershocks. The inelastic response at plastic hinges during the main-shock can also change the hysteretic behavior during the aftershocks. Higher ductility demands can likewise occur when buildings are subjected to trains of ground motions rather than individual ground motions. The general conclusion is that aftershocks following an earthquake can increase the damage within the elements of RC frame buildings. Current code provisions for seismic design do not consider the probability of significant aftershocks when designing a new building in zones of high seismic activity.
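The damage-accumulation argument can be illustrated with a toy model (not the authors' PERFORM-3D analysis): an elastic-perfectly-plastic single-degree-of-freedom oscillator subjected to a main-shock pulse followed by a weaker aftershock pulse. The mass, stiffness, damping, and pulse amplitudes are all illustrative assumptions.

```python
import numpy as np

def epp_history(ag, dt, m=1.0, k=400.0, fy=1.5, c=0.8):
    """Elastic-perfectly-plastic SDOF oscillator, semi-implicit Euler.
    ag: ground acceleration history (m/s^2). Returns displacement history."""
    u = v = up = 0.0                      # displacement, velocity, plastic offset
    us = []
    for a_g in ag:
        f = k * (u - up)                  # trial restoring force
        if abs(f) > fy:                   # yielding: slide the plastic offset
            up = u - np.sign(f) * fy / k
            f = np.sign(f) * fy
        a = -(f + c * v) / m - a_g        # equation of motion
        v += a * dt
        u += v * dt
        us.append(u)
    return np.array(us)

dt = 1e-3
pulse = lambda a0, td: a0 * np.sin(np.pi * np.arange(0.0, td, dt) / td)
quiet = np.zeros(int(2.0 / dt))
seq = np.concatenate([pulse(8.0, 0.5), quiet, pulse(5.0, 0.5), quiet])  # main-shock then aftershock
alone = np.concatenate([pulse(5.0, 0.5), quiet])                        # aftershock acting alone

uy = 1.5 / 400.0                          # yield displacement
mu_seq = np.max(np.abs(epp_history(seq, dt))) / uy
mu_alone = np.max(np.abs(epp_history(alone, dt))) / uy
print(mu_seq, mu_alone)                   # ductility demand: sequence vs single event
```

The plastic offset left by the main-shock carries into the second event, so the peak ductility demand of the sequence is at least that of the aftershock acting alone, which is the mechanism the abstract describes.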

Keywords: reinforced concrete, existing buildings, aftershocks, damage accumulation

588 Polymeric Composites with Synergetic Carbon and Layered Metallic Compounds for Supercapacitor Application

Authors: Anukul K. Thakur, Ram Bilash Choudhary, Mandira Majumder

Abstract:

In this technologically driven world, it is requisite to develop better, faster, and smaller electronic devices for various applications to keep pace with fast-developing modern life. It is also necessary to develop sustainable and clean sources of energy in an era in which the environment is threatened by pollution and its severe consequences. Supercapacitors have gained tremendous attention in recent years because of their attractive properties: they are essentially maintenance-free, offer high specific power and high power density, show excellent pulse charge/discharge characteristics and a long cycle life, require a very simple charging circuit, and operate safely. Binary and ternary composites of conducting polymers with carbon and other layered transition-metal dichalcogenides have shown tremendous progress in the last few decades. Compared with bulk conducting polymers, these composite forms have gained more attention because of their high electrical conductivity, large surface area, short ion-transport lengths, and superior electrochemical activity, properties that make them well suited to several energy storage applications. Carbon materials, for their part, have been studied intensively owing to their rich specific surface area, very light weight, excellent chemical and mechanical properties, and wide operating temperature range. They have been extensively employed in the fabrication of carbon-based energy storage devices and as electrode materials in supercapacitors. Incorporating carbon materials into polymers increases the electrical conductivity of the resulting composite because of the carbon's high conductivity, high surface area, and interconnectivity.
Further, polymeric composites based on layered transition-metal dichalcogenides such as molybdenum disulfide (MoS2) are also considered important because these materials are thin, indirect-band-gap semiconductors with band gaps of about 1.2 to 1.9 eV. Among the various 2D materials, MoS2 has received much attention because of its unique structure, consisting of a graphene-like hexagonal arrangement of Mo and S atoms stacked layer by layer to give S-Mo-S sandwiches held together by weak van der Waals forces. It shows higher intrinsic fast ionic conductivity than oxides and a higher theoretical capacitance than graphite.

Keywords: supercapacitor, layered transition-metal dichalcogenide, conducting polymer, ternary, carbon

587 Multiscale Modelization of Multilayered Bi-Dimensional Soils

Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R. Bennaceur

Abstract:

Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture, and measuring soil moisture content allows assessment of soil water resources in hydrology and agronomy. The second parameter that interacts with the radar signal, after soil moisture, is the geometric structure of the soil. Most traditional electromagnetic models treat natural surfaces as single-scale, zero-mean, stationary Gaussian random processes, with roughness behavior characterized by statistical parameters such as the root mean square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor, owing to the large variability of the correlation function; as a consequence, backscattering models have often failed to predict backscattering correctly. In this study, surfaces are instead considered band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale. Multiscale roughness is characterized by two parameters: one proportional to the RMS height and one related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles using the bi-dimensional wavelet transform and the Mallat algorithm to describe natural surfaces more accurately. We characterize the soil surface and sub-surface by a three-layer geo-electrical model.
The upper layer is described by its dielectric constant, its thickness, a multiscale bi-dimensional surface-roughness model based on the wavelet transform and the Mallat algorithm, and volume-scattering parameters. The lower layer is divided into three fictive layers separated by assumed plane interfaces. These three layers are modeled as an effective medium characterized by an apparent effective dielectric constant that takes into account the presence of air pockets in the soil. We adopted a 2D multiscale three-layer small perturbation model, including first air pockets in the soil sub-structure and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the dependence of the backscattering coefficient on multiscale roughness and soil moisture was performed. We then proposed to modify the dielectric constant of the multilayer medium so that it accounts for the different moisture values of each soil layer. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we studied the behavior of the radar backscattering coefficient for a soil with a vegetation layer in its surface structure.
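A hedged sketch of the roughness description above: a band-limited fractal profile built as a superposition of a finite number of one-dimensional zero-mean Gaussian processes, one per spatial scale, with the RMS amplitude decaying across scales. The parameter names `s` (RMS-height factor) and `nu` (scale decay, standing in for the fractal-dimension parameter) are illustrative, as is the use of plain FFT smoothing in place of the wavelet machinery.

```python
import numpy as np

def gaussian_process(n, rms, corr_len, rng):
    """Zero-mean stationary Gaussian process: white noise smoothed by a
    Gaussian kernel (via FFT), rescaled to the requested RMS height."""
    noise = rng.standard_normal(n)
    x = np.arange(n) - n // 2
    kern = np.exp(-(x / corr_len) ** 2)
    z = np.real(np.fft.ifft(np.fft.fft(noise) * np.fft.fft(np.fft.ifftshift(kern))))
    z -= z.mean()
    return rms * z / np.std(z)

def multiscale_profile(n=2048, scales=5, s=1.0, nu=0.7, l0=200.0, seed=0):
    """Superpose one Gaussian process per spatial scale: the RMS height decays
    as s*nu**m while the correlation length halves at each scale m."""
    rng = np.random.default_rng(seed)
    return sum(gaussian_process(n, s * nu ** m, l0 / 2 ** m, rng)
               for m in range(scales))

surface = multiscale_profile()
print(surface.shape, float(np.std(surface)))
```

The two multiscale parameters of the abstract map onto `s` (proportional to the overall RMS height) and the decay across scales (related to the fractal dimension); extending the sum to two dimensions is where the bi-dimensional wavelet transform comes in.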

Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets

586 Performance Evaluation of Various Displaced Left Turn Intersection Designs

Authors: Hatem Abou-Senna, Essam Radwan

Abstract:

With increasing traffic and limited resources, accommodating left-turning traffic has been a challenge for traffic engineers as they seek a balance between intersection capacity and safety, two conflicting goals in the operation of a signalized intersection that are mitigated through signal-phasing techniques. Hence, to increase left-turn capacity and reduce delay at intersections, the Florida Department of Transportation (FDOT) is moving forward with a vision of optimizing intersection control using innovative intersection designs through the Transportation Systems Management & Operations (TSM&O) program. These alternative designs eliminate the left-turn phase, which otherwise considerably reduces the efficiency of a conventional intersection (CI), and divide the intersection into smaller networks that operate in a one-way fashion. This study focused on crossover displaced left-turn (XDL) intersections, also known as continuous flow intersections (CFI). The XDL concept is best suited for intersections with moderate to high overall traffic volumes, especially those with very high or unbalanced left-turn volumes. There is little guidance on determining whether a partial XDL intersection is adequate to mitigate the overall intersection condition or whether a full XDL is always required. The primary objective of this paper was to evaluate overall intersection performance for different partial XDL designs compared to a full XDL. The XDL alternative was investigated for four scenarios: partial XDL on the east-west approaches, partial XDL on the north-south approaches, partial XDL on the north and east approaches, and full XDL on all four approaches. The impact of increasing volume on intersection performance was also considered by modeling the unbalanced volumes in 10% increments, resulting in five traffic scenarios.
The study intersection, located in Orlando, Florida, experiences recurring congestion in the PM peak hour and operates near capacity, with a volume-to-capacity ratio close to 1.00, due to the presence of two heavy conflicting movements: southbound and westbound. The results showed that the partial EN XDL alternative proved effective and compared favorably to the full XDL alternative, followed by the partial EW XDL alternative. The analysis also showed that the full, EW, and EN XDL alternatives outperformed the NS XDL and CI alternatives with respect to throughput, delay, and queue lengths. Throughput improvements were remarkable at the higher volume levels, with a 25% increase in capacity. The percent reduction in delay for the critical movements in the XDL scenarios compared to the CI scenario ranged from 30% to 45%; similarly, queue lengths in the XDL scenarios were reduced by 25% to 40%. The analysis revealed how a partial XDL design can improve overall intersection performance at various demand levels, reduce the costs associated with a full XDL, and outperform the conventional intersection. However, a partial XDL serving low volumes, or only one of the critical movements while the other critical movements operate near or above capacity, does not provide significant benefits compared to the conventional intersection.
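The capacity argument behind eliminating the left-turn phase can be sketched with the simplified (two-term) Webster delay formula for a single through movement. The volumes, saturation flow, cycle length, and green splits below are illustrative assumptions, not the study intersection's data or its VISSIM results.

```python
def webster_delay(v, s, g, C):
    """Simplified Webster control delay (first two terms, s/veh).
    v: demand (veh/h), s: saturation flow (veh/h),
    g: effective green (s), C: cycle length (s)."""
    lam = g / C                           # green ratio
    x = min(v / (s * lam), 0.99)          # degree of saturation (capped)
    d_uniform = C * (1 - lam) ** 2 / (2 * (1 - lam * x))
    d_overflow = x ** 2 / (2 * (v / 3600.0) * (1 - x))
    return d_uniform + d_overflow

v, s, C = 900, 1800, 120                  # illustrative through movement
d_ci = webster_delay(v, s, g=45, C=C)     # CI: green shortened by a protected left-turn phase
d_xdl = webster_delay(v, s, g=65, C=C)    # XDL: left-turn phase eliminated, green reassigned
print(round(d_ci, 1), round(d_xdl, 1))    # delay drops sharply with the recovered green
```

Reassigning the left-turn phase's green time raises the movement's capacity and pulls its degree of saturation away from 1.0, which is where the overflow term of the delay formula explodes; this is the mechanism behind the 30-45% delay reductions reported above.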

Keywords: continuous flow intersections, crossover displaced left-turn, microscopic traffic simulation, transportation system management and operations, VISSIM simulation model

585 Characterizing and Developing the Clinical Grade Microbiome Assay with a Robust Bioinformatics Pipeline for Supporting Precision Medicine Driven Clinical Development

Authors: Danyi Wang, Andrew Schriefer, Dennis O'Rourke, Brajendra Kumar, Yang Liu, Fei Zhong, Juergen Scheuenpflug, Zheng Feng

Abstract:

Purpose: It has been recognized that the microbiome plays critical roles in disease pathogenesis, including cancer, autoimmune disease, and multiple sclerosis. To develop a clinical-grade assay for exploring microbiome-derived clinical biomarkers across disease areas, a two-phase approach was implemented: 1) identification of the optimal sample-preparation reagents using pre-mixed bacteria and healthy-donor stool samples, coupled with the proprietary Sigma-Aldrich® bioinformatics solution; and 2) exploratory analysis of patient samples for enabling precision medicine. Study Procedure: In the phase 1 study, we first compared the 16S sequencing results of two ATCC® microbiome standards (MSA-2002 and MSA-2003) across five different extraction kits (kits A, B, C, D, and E). Both microbiome standards were extracted in triplicate with each extraction kit. Following isolation, DNA quantity was determined by Qubit assay, and DNA quality was assessed to determine purity and to confirm that the extracted DNA was of high molecular weight. Bacterial 16S ribosomal ribonucleic acid (rRNA) amplicons were generated via amplification of the V3/V4 hypervariable region of the 16S rRNA gene. Sequencing was performed using a 2x300 bp paired-end configuration on the Illumina MiSeq. Fastq files were analyzed using the Sigma-Aldrich® Microbiome Platform, a cloud-based service that offers best-in-class 16S-seq and WGS analysis pipelines and databases. The Platform and its methods have been extensively benchmarked using microbiome standards generated internally by MilliporeSigma and by external providers. Data Summary: The DNA yield using extraction kits D and E was below the limit of detection (100 pg/µl) of the Qubit assay, as both kits are intended for samples with low bacterial counts: the pre-mixed bacterial pellets at high concentrations, with inputs of 2 x 10^6 cells for MSA-2002 and 1 x 10^6 cells for MSA-2003, were not compatible with these kits.
Among the remaining three extraction kits, kit A produced the greatest yield, whereas kit B produced the least (Kit-A/MSA-2002: 174.25 ± 34.98; Kit-A/MSA-2003: 179.89 ± 30.18; Kit-B/MSA-2002: 27.86 ± 9.35; Kit-B/MSA-2003: 23.14 ± 6.39; Kit-C/MSA-2002: 55.19 ± 10.18; Kit-C/MSA-2003: 35.80 ± 11.41 (mean ± SD)). The PCoA 3D visualization of weighted UniFrac beta diversity shows that kits A and C cluster closely together, while kit B appears as an outlier; the kit A samples also cluster more tightly than those of the other kits. The taxonomic profiles of kit B have lower recall when compared to the known mixture profiles, indicating that kit B was inefficient at detecting some of the bacteria. Conclusion: Our data demonstrate that the DNA extraction method impacts the DNA concentration, purity, and microbial communities detected by next-generation sequencing analysis. A further comparison of microbiome analysis performance using healthy stool samples is underway, and colorectal cancer patient samples will be acquired to further explore clinical utility. Collectively, our comprehensive qualification approach, including the evaluation of optimal DNA extraction conditions, the inclusion of positive controls, and the implementation of a robust, qualified bioinformatics pipeline, assures accurate characterization of the microbiota in a complex matrix, deciphering the deep biology and enabling precision medicine.
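The clustering step above used weighted UniFrac on the Sigma-Aldrich® platform. As a self-contained stand-in, the sketch below uses Bray-Curtis dissimilarity with classical PCoA (the same ordination idea, minus the phylogenetic weighting) on invented toy abundance profiles in which one kit under-detects low-abundance taxa.

```python
import numpy as np

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return np.abs(a - b).sum() / (a + b).sum()

def pcoa(dist, k=2):
    """Classical PCoA: double-center the squared distances, eigendecompose."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dist ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy abundance table: rows = extraction replicates, columns = taxa.
# Counts are invented: "kits A and C" recover similar profiles, "kit B"
# under-detects the low-abundance taxa (lower recall).
profiles = np.array([
    [30, 25, 20, 15, 10],   # kit A, replicate 1
    [28, 26, 21, 14, 11],   # kit A, replicate 2
    [29, 24, 22, 15, 10],   # kit C
    [45, 40,  5,  8,  2],   # kit B
], dtype=float)

n = len(profiles)
D = np.array([[bray_curtis(profiles[i], profiles[j]) for j in range(n)]
              for i in range(n)])
coords = pcoa(D)             # 2D ordination: kit B separates from A and C
print(D[0, 1], D[0, 3])      # within-A distance vs A-to-B distance
```

In the ordination, the depleted profile sits far from the tight A/C cluster, which is how the outlier behavior of kit B shows up in the PCoA plot.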

Keywords: 16S rRNA sequencing, analytical validation, bioinformatics pipeline, metagenomics

584 Early Influences on Teacher Identity: Perspectives from the USA and Northern Ireland

Authors: Martin Hagan

Abstract:

Teacher identity has been recognised as a crucial field of research which supports understanding of the ways in which teachers navigate the complexities of professional life in order to grow in competence, knowledge and practice. As a field of study, teacher identity is concerned with understanding how identity is defined, how it develops, how teachers make sense of their emerging identity, and how the act of teaching is mediated through the individual teacher’s values, beliefs and sense of professional self. By comparing two particular, socially constructed learning contexts or ‘learning milieu’, one in Northern Ireland and the other in the United States of America, this study aims, specifically, to gain a better understanding of how teacher identity develops during the initial phase of teacher education. The comparative approach was adopted on the premise that experiences are constructed through interactive, socio-historical and cultural negotiations with others within particular environments, situations and contexts. As such, whilst the common goal is to ‘become’ a teacher, the nuances emerging from the different learning milieux highlight variance in discourse, priorities, practice and influence. A qualitative, interpretative research design was employed to understand the world-constructions of the participants by asking open-ended questions, seeking views and perspectives, examining contexts and eventually deducing meaning. Data were collected using semi-structured interviews with a purposive sample of student teachers (n=14) in either the first or second year of study at their respective institutions. In addition, a sample of teacher educators (n=5) responsible for the design, organisation and management of the programmes was also interviewed.
Inductive thematic analysis was then conducted, which highlighted issues related to: the participants’ personal dispositions, prior learning experiences and motivation; the influence of the teacher education programme on the participants’ emerging professional identity; and the extent to which the experiences of working with teachers and pupils in schools in the context of the practicum, challenged and changed perspectives on teaching as a professional activity. The study also highlights the varying degrees of influence exercised by the different roles (tutor, host teacher/mentor, student) within the teacher-learning process across the two contexts. The findings of the study contribute to the understanding of teacher identity development in the early stages of professional learning. By so doing, the research makes a valid contribution to the discourse on initial teacher preparation and can help to better inform teacher educators and policy makers in relation to appropriate strategies, approaches and programmes to support professional learning and positive teacher identity formation.

Keywords: initial teacher education, professional learning, professional growth, teacher identity

583 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip-control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force-reconstruction process, the stage in which they have been least applied. This work presents a hardware implementation of a model-driven approach reported in the literature for the contact-force reconstruction of flat, rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of the model, this implementation parallelizes the tasks that carry out the matrix operations and a two-dimensional optimization function, obtaining a force vector for each taxel in the array. The work takes advantage of the parallelism of field-programmable gate arrays (FPGAs) and applies appropriate algorithm-parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process and treating low latency, low power consumption, and real-time execution as the main design parameters.
The results show a maximum estimation error of 32% for the tangential forces and 22% for the normal forces with respect to simulation by the finite element modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10x10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors in a scalable fashion from information captured by tactile sensor arrays of up to 48x48 taxels using various transduction technologies. The proposed implementation demonstrates a roughly 180-fold reduction in estimation time compared to software implementations. Despite the relatively high estimation errors, the information this implementation provides on the tangential and normal tractions and on the triaxial reconstruction of forces is sufficient to reconstruct the tactile properties of the touched object, which are similar to those obtained in the software implementation and in the two FEM simulations taken as references. Although the errors could be reduced further, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, helping to expand electronic-skin applications in robotic and biomedical contexts.
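As a rough sketch of the model-driven reconstruction idea (not the authors' specific model or its two-dimensional optimization function), the snippet below recovers per-taxel force vectors from simulated stress readings by regularized least squares; the forward matrix, its dimensions, and the noise level are all invented for illustration. These are the kinds of matrix operations that the FPGA design parallelizes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_taxels = 16                              # e.g. a 4x4 patch, 3 force components each
n_unknowns = 3 * n_taxels
n_readings = 120                           # assume the stress map is sampled more
                                           # finely than the taxel grid
T = rng.standard_normal((n_readings, n_unknowns))   # assumed-known linear forward model
f_true = rng.standard_normal(n_unknowns)            # (fx, fy, fz) per taxel
stress = T @ f_true + 0.01 * rng.standard_normal(n_readings)  # noisy readings

lam = 1e-3                                 # Tikhonov regularization weight
f_est = np.linalg.solve(T.T @ T + lam * np.eye(n_unknowns), T.T @ stress)
rel_err = np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true)
print(rel_err)                             # small relative reconstruction error
```

The normal-equation solve factors into dense matrix products, which is why the tactile decoding maps naturally onto the parallel multiply-accumulate resources of an FPGA.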

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

582 My Perfect Partner: Creative Methods in Relationship Education

Authors: Janette Porter, Kay Standing

Abstract:

The paper presents our experiences of working in both mainstream and Special Education Needs and Disabilities (SEND) schools in England from 2012-2019, using creative methodologies to deliver and evaluate healthy relationship education. It aims to explore how young people's perceptions of relationships and their "perfect partner" are mediated by factors such as gender, body image, and social media. It will be an interactive session, inviting participants to reflect on their own experiences of relationship education and to take part in an example of a classroom activity, 'a perfect partner'. Young people aged 16-25 are most at risk of relationship abuse and intimate partner violence, which can be enacted on the body, through physical and sexual violence, but also through emotional and psychological abuse. In England and Wales, relationship education became compulsory in schools in September 2020. There is increasing recognition of the need for whole-school approaches to prevent gender-based violence, in particular domestic abuse, from happening in the first place, and for equipping schools to feel more confident supporting young people affected by gender-based violence. The project used creative methods, including arts, drama, music, poetry, song, and creative writing, to engage participants in sensitive topics related to relationship education. Interactive workshops with pupils aged 11-19 enabled young people to express themselves freely; pupils then used drama to share their knowledge with their peer group. We co-produced material with young people, including an accessible resource pack for use in SEND schools, particularly for children with visual and sensory impairments. The project was evaluated through questionnaires and interviews with pupils. The paper also reflects on the ethical issues involved in the research.
After the project, young people had a better understanding of healthy and unhealthy relationships, improved knowledge of the early warning signs of abuse, and knew where to go for help and advice. The project found that creative methods are an effective way to engage young people in relationship education and sensitive topics. We argue that age- and ability-appropriate relationship education should be compulsory across the curriculum, and that implementing creative and art-based approaches to address sensitive topics can enhance the effectiveness of relationship education programs in promoting healthy relationships and preventing abuse. The paper offers academic and practitioner perspectives, reflecting on our research and on the practical, methodological, and ethical issues involved in researching gender-based violence with young people in a school setting.

Keywords: relationship education, healthy relationships, creative methods, young people

581 Experiment-Based Teaching Method for the Varying Frictional Coefficient

Authors: Mihaly Homostrei, Tamas Simon, Dorottya Schnider

Abstract:

The topic of oscillation in physics is one of the key ideas that is usually taught based on the concept of harmonic oscillation. Dealing with a frictional oscillator can be an interesting activity in advanced high school classes or in university courses. Its mechanics are investigated in this research, which shows that the motion of the frictional oscillator is more complicated than that of a simple harmonic oscillator. The physics of the applied model seems interesting and useful for undergraduate students. The study presents a well-known physical system, mostly discussed theoretically in high school and at university. The ideal frictional oscillator is normally used as an example of harmonic oscillatory motion, as its theory relies on a constant coefficient of sliding friction. The structure of the system is simple: a rod with a homogeneous mass distribution is placed on two identical rotating cylinders at the same height, horizontally aligned, rotating at the same angular velocity but in opposite directions. Based on this setup, one can easily show that the equation of motion describes a harmonic oscillation: the magnitudes of the normal forces are functions of the rod's position, and the frictional forces, with a constant coefficient of friction, are proportional to them. The whole description of the model therefore relies on simple Newtonian mechanics, accessible to students even in high school. On the other hand, the behavior of the frictional oscillator is not so straightforward after all; experiments show that simple harmonic oscillation is not observed in all cases, and the system performs a much more complex movement, whereby the rod settles into a non-harmonic oscillation with a nonzero stable amplitude after an unconventional damping effect.
The stable amplitude, in this case, means that the position function of the rod converges to a harmonic oscillation with a constant amplitude. This leads to a more complex model that describes the motion of the rod more accurately. The main difference from the original equation of motion is that the frictional coefficient varies with the relative velocity. This velocity dependence has been investigated in many research articles; this specific problem demonstrates the key concept of the varying friction coefficient and its importance in an interesting and illustrative way. The position function of the rod is described by a more complicated, non-trivial, yet more precise equation than the usual harmonic description of the movement. The study discusses the structure of the measurements related to the frictional oscillator, the qualitative and quantitative derivation of the theory, and the comparison of the final theoretical function with the measured position function in time. The project provides useful material and knowledge for undergraduate students and a new perspective in university physics education.
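A minimal numerical sketch of the richer model (a Stribeck-type, velocity-dependent friction coefficient; all parameter values are illustrative, not the authors' measured ones) reproduces the qualitative behavior described above: a small initial displacement is pumped up by the velocity dependence and settles into a non-harmonic oscillation with a finite, stable amplitude.

```python
import numpy as np

g, L, v_c = 9.81, 0.10, 0.2        # gravity, cylinder half-spacing (m), surface speed (m/s)
mu_k, mu_s, v0 = 0.30, 0.50, 0.10  # friction levels and decay speed (illustrative)
dt = 5e-4

def mu(v_rel):
    """Friction coefficient decaying with relative sliding speed."""
    return mu_k + (mu_s - mu_k) * np.exp(-abs(v_rel) / v0)

def simulate(t_end=30.0, x=1e-3, v=0.0):
    """Rod on two counter-rotating cylinders at x = -L and x = +L.
    Normal forces (per unit mass) depend on the rod's position; friction
    depends on the relative sliding speed at each contact."""
    xs = []
    for _ in range(int(t_end / dt)):
        n_right = g * (L + x) / (2 * L)
        n_left = g * (L - x) / (2 * L)
        vr_left, vr_right = v - v_c, v + v_c          # rod minus surface velocity
        a = (-mu(vr_left) * n_left * np.sign(vr_left)
             - mu(vr_right) * n_right * np.sign(vr_right))
        v += a * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

xs = simulate()
a_early = np.max(np.abs(xs[:int(1.0 / dt)]))
a_late = np.max(np.abs(xs[len(xs) // 2:]))
print(a_early, a_late)     # a small start grows to a finite, stable amplitude
```

With a constant coefficient (mu_s = mu_k) the same code yields plain harmonic oscillation; it is the decay of mu with sliding speed that feeds energy into the motion and produces the self-limiting, non-harmonic oscillation the measurements show.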

Keywords: friction, frictional coefficient, non-harmonic oscillator, physics education
