Search results for: high pressure delignification
2358 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that are able to use the large amount and variety of data generated during healthcare services every day. Over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly, because the use of advanced technologies improves at the same time the efficiency and the efficacy of healthcare. Software defined as a medical device is stand-alone software that is intended to be used for patients for one or more specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body, without achieving its principal intended action by pharmacological, immunological or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients’ safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers.
Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract will provide an overview of the current regulatory framework, the evolution of the international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 90
2357 Hydrothermal Aging Behavior of Continuous Carbon Fiber Reinforced Polyamide 6 Composites
Authors: Jifeng Zhang, Yongpeng Lei
Abstract:
Continuous carbon fiber reinforced polyamide 6 (CF/PA6) composites have potential for application in the automotive industry due to their high specific strength and stiffness. However, PA6 resin is sensitive to moisture in a hydrothermal environment, and CF/PA6 composites may undergo several physical and chemical changes, such as plasticization, swelling, and hydrolysis, which induce a reduction of mechanical properties. So far, little research has been reported on the effects of hydrothermal aging on the mechanical properties of continuous CF/PA6 composites. This study deals with the effects of hydrothermal aging on the moisture absorption and mechanical properties of polyamide 6 (PA6) and continuous carbon fiber reinforced polyamide 6 (CF/PA6) composites immersed in distilled water at 30 ℃, 50 ℃, 70 ℃, and 90 ℃. Degradation of mechanical performance was monitored as a function of water absorption content and aging temperature. The experimental results reveal that under the same aging conditions, the PA6 resin absorbs more water than the CF/PA6 composite, while the water diffusion coefficient of the CF/PA6 composite is higher than that of the PA6 resin because of interfacial diffusion channels. During mechanical degradation, an exponential reduction in tensile strength and elastic modulus is observed in the PA6 resin as aging temperature and water absorption content increase. The degradation trend of the flexural properties of CF/PA6 is the same as that of the tensile properties of the PA6 resin. Moreover, water content plays a decisive role in mechanical degradation compared with aging temperature. In contrast, the hydrothermal environment has a mild effect on the tensile properties of CF/PA6 composites. The elongation at break of the PA6 resin and of CF/PA6 reaches its highest value when their water content reaches 6% and 4%, respectively.
Dynamic mechanical analysis (DMA) and scanning electron microscopy (SEM) were also used to explain the mechanism of the changes in mechanical properties. After exposure to the hydrothermal environment, the Tg (glass transition temperature) of the samples decreases dramatically as water content increases. This reduction can be ascribed to the plasticization effect of water. For the unaged specimens, the fiber surfaces are coated with resin and the main fracture mode is fiber breakage, indicating good adhesion between fiber and matrix. However, as the absorbed water content increases, the fracture mode transforms to fiber pullout. Finally, based on Arrhenius methodology, a predictive model relating temperature and water content has been presented to estimate the retention of mechanical properties of PA6 and CF/PA6.
Keywords: continuous carbon fiber reinforced polyamide 6 composite, hydrothermal aging, Arrhenius methodology, interface
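The Arrhenius-based prediction the abstract closes with can be sketched in a few lines. The activation energy, pre-exponential factor, and exponential-decay retention form below are illustrative placeholders, not values fitted to the CF/PA6 data; only the Arrhenius temperature dependence itself mirrors the methodology named in the abstract.

```python
import numpy as np

# Illustrative Arrhenius-type retention model: mechanical property
# retention decays with an aging-rate constant k(T) that follows the
# Arrhenius equation. Ea and A are hypothetical, not fitted values.
R = 8.314        # J mol^-1 K^-1, universal gas constant
Ea = 50_000.0    # J mol^-1, hypothetical activation energy
A = 2.0e5        # day^-1, hypothetical pre-exponential factor

def aging_rate(temp_c: float) -> float:
    """Arrhenius rate constant at temperature temp_c (Celsius)."""
    return A * np.exp(-Ea / (R * (temp_c + 273.15)))

def retention(temp_c: float, t_days: float) -> float:
    """Fraction of initial strength retained after t_days of aging."""
    return float(np.exp(-aging_rate(temp_c) * t_days))

# Hotter immersion water ages the material faster, consistent with the
# 30-90 C trend the abstract reports:
for T in (30, 50, 70, 90):
    print(T, round(retention(T, 30), 3))
```

A fitted version of such a model would estimate Ea and A from the measured strength-retention data at the four aging temperatures.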
Procedia PDF Downloads 121
2356 Surge in U.S. Citizens' Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale
Authors: Marco Sewald
Abstract:
Compared to the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to the number of immigrants, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many renunciants not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. Since there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. Since it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the expatriation process, the chosen methodology is Structural Equation Modeling (SEM), in the first step re-using recent surveys conducted by different researchers within the population of U.S. citizens residing abroad, questioning their personal situation in the context of tax, compliance, citizenship, and the likelihood of repatriating to the U.S. In general, SEM allows: (1) representing, estimating, and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous); (3) including unobservable latent variables; (4) modeling measurement error: the degree to which observable variables describe the latent variables.
Moreover, SEM is appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of each construct are caused by various latent variables. The given surveys delivered highly correlated indicators, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one of the desired results. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and obtain the desired results.
Keywords: expatriation of U.S. citizens, SEM, structural equation modeling, validating
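The measurement (outer) model that SEM estimates can be illustrated with simulated data. The loadings, error variances, and sample size below are invented for the sketch; it only demonstrates the structural fact the abstract relies on, that observed indicators are linear functions of a latent variable plus measurement error, so their covariance matrix factors accordingly.

```python
import numpy as np

# Minimal sketch of a one-factor SEM measurement model: each observed
# indicator x_i = loading_i * eta + error_i. All numbers are
# illustrative, not taken from the surveys cited in the abstract.
rng = np.random.default_rng(0)

n = 50_000                              # simulated respondents
loadings = np.array([0.9, 0.8, 0.7])    # hypothetical factor loadings
noise_sd = np.array([0.3, 0.4, 0.5])    # measurement-error SDs

eta = rng.standard_normal(n)            # latent variable (e.g. perceived tax burden)
x = np.outer(eta, loadings) + rng.standard_normal((n, 3)) * noise_sd

# In this model the population covariance of the indicators equals
# loadings * loadings' + diag(noise_sd**2); SEM estimation exploits
# exactly this implied-covariance structure.
implied = np.outer(loadings, loadings) + np.diag(noise_sd**2)
sample = np.cov(x, rowvar=False)
print(np.max(np.abs(sample - implied)))  # small sampling error
```

When indicators are highly correlated, as the abstract reports, the individual loadings become hard to separate, which is precisely the identification problem the authors describe.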
Procedia PDF Downloads 221
2355 Numerical Board Game for Low-Income Preschoolers
Authors: Gozde Inal Kiziltepe, Ozgun Uyanik
Abstract:
There is growing evidence that socioeconomic status (SES)-related differences in mathematical knowledge start primarily in the early childhood period. Preschoolers from low-income families are likely to perform substantially worse in mathematical knowledge than their counterparts from middle- and higher-income families. The differences are seen across a wide range of skills: recognizing written numerals, counting, adding and subtracting, and comparing numerical magnitudes. Early differences in numerical knowledge have a lasting effect on children's mathematical knowledge in later grades. In this respect, analyzing the effect of a number board game on the number knowledge of 48-60 month-old children from disadvantaged low-income families constitutes the main objective of the study. Participants were 71 preschoolers from a childcare center which served low-income urban families. Children were randomly assigned to the number board condition or to the color board condition. The number board condition included 35 children and the color board condition included 36 children. Both board games were 50 cm long and 30 cm high; had ‘The Great Race’ written across the top; and included 11 horizontally arranged, different-colored squares of equal size, with the leftmost square labeled ‘Start’. The numerical board had the numbers 1–10 in the rightmost 10 squares; the color board had different colors in those squares. Children selected a rabbit or a bear token and on each trial spun a spinner to determine whether the token would move one or two spaces. The number condition spinner had a ‘1’ half and a ‘2’ half; the color condition spinner had colors that matched the colors of the squares on the board. Children met one-on-one with an experimenter for four 15- to 20-min sessions within a 2-week period. In the first and fourth sessions, children were administered identical pretest and posttest measures of numerical knowledge.
All children were presented with three numerical tasks and one subtest in the following order: counting, numerical magnitude comparison, numerical identification, and the Count Objects – Circle Number Probe subtest of the Early Numeracy Assessment. In addition, the same numerical tasks and subtest were given as a follow-up test four weeks after the post-test administration. Findings obtained from the study showed a significant difference between the scores of children who played the color board game and those who played the number board game, in favor of the children who played the number board game.
Keywords: low income, numerical board game, numerical knowledge, preschool education
Procedia PDF Downloads 353
2354 Self-Assembled ZnFeAl Layered Double Hydroxides as Highly Efficient Fenton-Like Catalysts
Authors: Marius Sebastian Secula, Mihaela Darie, Gabriela Carja
Abstract:
Ibuprofen is a non-steroidal anti-inflammatory drug (NSAID) and is among the most frequently detected pharmaceuticals in environmental samples and among the most widely used drugs in the world. Its concentration in the environment is reported to be between 10 and 160 ng L-1. In order to improve the abatement efficiency of this compound for water source protection and reclamation, the development of innovative technologies is mandatory. AOPs (advanced oxidation processes) are known to be highly efficient for the oxidation of organic pollutants. Among the promising combined treatments, photo-Fenton processes using layered double hydroxides (LDHs) have attracted significant consideration, especially due to their compositional flexibility, high surface area, and tailored redox features. This work presents self-supported Fe, Mn or Ti on ZnFeAl LDHs, obtained by co-precipitation followed by the reconstruction method, as novel efficient photo-catalysts for Fenton-like catalysis. The Fe, Mn or Ti/ZnFeAl LDH nano-hybrids were tested for the degradation of a model pharmaceutical agent, the anti-inflammatory drug ibuprofen, by photocatalysis and photo-Fenton catalysis, respectively, by means of a lab-scale system consisting of a batch reactor equipped with a UV lamp (17 W). The present study compares the degradation of ibuprofen in aqueous solution under UV light irradiation using four different types of LDHs. The newly prepared Ti/ZnFeAl 4:1 catalyst gives the best degradation performance: after 60 minutes of light irradiation, the ibuprofen removal efficiency reaches 95%. The slowest degradation of the ibuprofen solution occurs with the Fe/ZnFeAl 4:1 LDH (67% removal efficiency after 60 minutes of the process). The evolution of ibuprofen degradation during the photo-Fenton process is also studied using Ti/ZnFeAl 2:1 and 4:1 LDHs in the presence and absence of H2O2.
It is found that after 60 min, the use of the Ti/ZnFeAl 4:1 LDH in the presence of 100 mg/L H2O2 leads to the fastest degradation of the ibuprofen molecule. After 120 min, both the Ti/ZnFeAl 4:1 and 2:1 catalysts reach the same removal efficiency (98%). In the absence of H2O2, ibuprofen degradation reaches only 73% removal efficiency after 120 min of the degradation process. Acknowledgements: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405.
Keywords: layered double hydroxide, advanced oxidation process, micropollutant, heterogeneous Fenton
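One common way to compare removal efficiencies like those quoted above is to assume pseudo-first-order kinetics (C/C0 = exp(-kt)) and back out an apparent rate constant from each reported efficiency. That kinetic model is our assumption for illustration here, not a claim made in the abstract.

```python
import math

# Apparent pseudo-first-order rate constant from a removal fraction
# observed at time t_min. The kinetic model is an assumption commonly
# used for photocatalytic degradation, applied to the reported numbers.
def apparent_rate_constant(removal_fraction: float, t_min: float) -> float:
    """k_app in min^-1, assuming C/C0 = exp(-k t)."""
    return -math.log(1.0 - removal_fraction) / t_min

k_ti = apparent_rate_constant(0.95, 60)  # Ti/ZnFeAl 4:1, 95% in 60 min
k_fe = apparent_rate_constant(0.67, 60)  # Fe/ZnFeAl 4:1, 67% in 60 min
print(round(k_ti, 4), round(k_fe, 4))
```

On this assumption the Ti catalyst is roughly 2.7 times faster than the Fe catalyst, a sharper comparison than the raw percentages alone.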
Procedia PDF Downloads 229
2353 Moderate Electric Field Influence on Carotenoids Extraction Time from Heterochlorella luteoviridis
Authors: Débora P. Jaeschke, Eduardo A. Merlo, Rosane Rech, Giovana D. Mercali, Ligia D. F. Marczak
Abstract:
Carotenoids are high-value-added pigments that can alternatively be extracted from some microalgae species. However, the application of carotenoids synthesized by microalgae is still limited due to the utilization of toxic organic solvents. In this context, studies involving alternative extraction methods have been conducted with more sustainable solvents in order to replace and reduce the solvent volume and the extraction time. The aim of the present work was to evaluate the extraction time of carotenoids from the microalga Heterochlorella luteoviridis using moderate electric field (MEF) as a pre-treatment to the extraction. The extraction methodology consisted of a pre-treatment in the presence of MEF (180 V) and ethanol (25%, v/v) for 10 min, followed by a diffusive step performed for 50 min using a higher ethanol concentration (75%, v/v). The extraction experiments were conducted at 30 °C; to keep the temperature at this value, an extraction cell with a water jacket connected to a water bath was used. Also, to enable the evaluation of the MEF effect on the extraction, control experiments were performed using the same cell and conditions without voltage application. During the extraction experiments, samples were withdrawn at 1, 5 and 10 min of the pre-treatment and at 1, 5, 30, 40 and 50 min of the diffusive step. Samples were then centrifuged, and carotenoid analyses were performed on the supernatant. Furthermore, an exhaustive extraction with ethyl acetate and methanol was performed, and the carotenoid content found in this analysis was considered the total carotenoid content of the microalga. The results showed that the application of MEF as a pre-treatment influenced the extraction yield and the extraction time during the diffusive step; after the MEF pre-treatment and 50 min of the diffusive step, it was possible to extract up to 60% of the total carotenoid content.
Also, the carotenoid concentrations of the extracts withdrawn at 5 and 30 min of the diffusive step did not present a statistical difference, meaning that carotenoid diffusion occurs mainly at the very beginning of the extraction. On the other hand, the results for the control experiments showed that carotenoid diffusion occurs mostly during the first 30 min of the diffusive step, which evidenced the MEF effect on the extraction time. Moreover, the carotenoid concentrations of the samples withdrawn during the pre-treatment (1, 5 and 10 min) were below the quantification limit of the analysis, indicating that the extraction occurred in the diffusive step, when ethanol (75%, v/v) was added to the medium. It is possible that MEF promoted cell membrane permeabilization and, when ethanol (75%) was added, carotenoids interacted with the solvent and diffusion occurred more easily. Based on the results, it is possible to infer that MEF decreased the carotenoid extraction time by increasing the permeability of the cell membrane, which facilitates diffusion from the cell to the medium.
Keywords: moderate electric field (MEF), pigments, microalgae, ethanol
Procedia PDF Downloads 463
2352 Barriers and Facilitators of Community Based Mental Health Intervention (CMHI) in Rural Bangladesh: Findings from a Descriptive Study
Authors: Rubina Jahan, Mohammad Zayeed Bin Alam, Sazzad Chowdhury, Sadia Chowdhury
Abstract:
Access to mental health services in Bangladesh is a tale of urban privilege and rural struggle. Mental health services in the country are primarily centered in urban medical hospitals, with only 260 psychiatrists for a population of more than 162 million, while rural populations face far more severe and daunting challenges. In alignment with the World Health Organization's view of mental health as a basic human right and a crucial component of personal, community, and socioeconomic development, SAJIDA Foundation, a value-driven non-government organization in Bangladesh, has introduced a Community Based Mental Health Intervention (CMHI) program to fill critical gaps in mental health care, providing accessible and affordable community-based services to protect and promote mental health and offering support for those grappling with mental health conditions. The CMHI program is being implemented in 3 districts in Bangladesh, 2 of them remote and highly climate-vulnerable areas, targeting a total of 6,797 individuals. The intervention plan involves screening all participants using a 10-point vulnerability assessment tool to identify vulnerable individuals. The underlying assumption is that vulnerability is primarily due to biological, psychological, social, and economic factors, and that vulnerable individuals are at an increased risk of developing common mental health issues. Those identified as vulnerable, with high-risk and emergency conditions, will receive Mental Health First Aid (MHFA) and undergo further screening with the GHQ-12 to be identified as cases and non-cases. The identified cases are then referred to community lay counsellors with basic training and knowledge, who provide 4-6 sessions on problem solving or behavior activation. In situations where no improvement occurs after lay counselling, or for individuals with severe mental health conditions, a referral process will be initiated, directing individuals to appropriate mental health care.
In our presentation, we will present the findings from the 6-month pilot implementation, focusing on community-based screening versus the outcomes of the lay counseling sessions, and on the barriers and facilitators of implementing community-based mental health care in a resource-constrained country like Bangladesh.
Keywords: community-based mental health, lay counseling, rural Bangladesh, treatment gap
Procedia PDF Downloads 44
2351 Enhancing Patient Outcomes Through Quality Improvement: Reducing Contamination Rates in Karyotyping Samples via Effective Audits and Staff Engagement
Authors: Rofaida Ashour
Abstract:
This study discusses the implementation of quality improvement initiatives aimed at reducing contamination rates in cultured karyotyping samples. The primary objective was to enhance patient outcomes through systematic audits and targeted staff engagement. Recognizing the critical impact of sample integrity on diagnostic accuracy, a thorough analysis was conducted to identify the root causes of contamination. The project involved two audit cycles, which facilitated a comprehensive assessment of adherence to local protocols. Key issues identified included lapses in the use of personal protective equipment (PPE) and inadequate awareness of proper sample handling procedures among staff. To address these challenges, a multi-faceted approach was adopted. Firstly, a presentation was delivered to the laboratory team emphasizing the significance of strict adherence to PPE guidelines during the collection and handling of samples. This session aimed to raise awareness and foster a culture of safety within the unit. Additionally, informative posters illustrating the correct procedures were strategically placed around the laboratory to serve as ongoing visual reminders for staff. Recognizing the heightened risk associated with patients exhibiting fever or signs of infection, special measures were introduced to manage their sample collection. These proactive strategies were designed to minimize the likelihood of introducing contaminated samples into the culture process. The results of the audits demonstrated a significant reduction in contamination rates, underscoring the effectiveness of the interventions. This experience reinforced the importance of continuous quality improvement in healthcare settings, particularly in ensuring the delivery of high-quality, safe, and efficient services. 
Conducting regular audits not only provided valuable insights into operational practices but also highlighted the critical role of active team engagement and a data-driven approach in decision-making. Effective communication and collaboration among team members emerged as essential components for the success of quality improvement initiatives.
Keywords: quality improvement, contamination rates, karyotyping samples, healthcare protocols, staff engagement
Procedia PDF Downloads 4
2350 Placebo Analgesia in Older Age: Evidence from Event-Related Potentials
Authors: Angelika Dierolf, K. Rischer, A. Gonzalez-Roldan, P. Montoya, F. Anton, M. Van der Meulen
Abstract:
Placebo analgesia is a powerful cognitive endogenous pain modulation mechanism with high relevance in pain treatment. Older people would especially benefit from non-pharmacological pain interventions, since this age group is disproportionately affected by acute and chronic pain, while pharmacological treatments are less suitable due to polypharmacy and age-related changes in drug metabolism. Although aging is known to affect neurobiological and physiological aspects of pain perception, such as changes in pain threshold and pain tolerance, its effects on cognitive pain modulation strategies, including placebo analgesia, have hardly been investigated so far. In the present study, we are assessing placebo analgesia in 35 older adults (60 years and older) and 35 younger adults (between 18 and 35 years). Acute pain was induced with short transdermal electrical pulses to the inner forearm, using a concentric stimulating electrode. Stimulation intensities were individually adjusted to each participant’s threshold. Next to the stimulation site, we applied sham transcutaneous electrical nerve stimulation (TENS). Participants were informed that sometimes the TENS device would be switched on (placebo condition) and sometimes it would be switched off (control condition). In reality, it was always switched off. Participants received alternating blocks of painful stimuli in the placebo and control conditions and were asked to rate the intensity and unpleasantness of each stimulus on a visual analog scale (VAS). Pain-related evoked potentials were recorded with a 64-channel EEG. Preliminary results show a reduced placebo effect in older compared to younger adults in both the behavioral and neurophysiological data. Older people experienced less subjective pain reduction under sham TENS treatment than younger adults, as evidenced by the VAS ratings. The N1 and P2 event-related potential components were generally reduced in the older group.
While younger adults showed a reduced N1 and P2 under sham TENS treatment, this reduction was considerably smaller in older people. The reduced placebo effect in the older group suggests that cognitive pain modulation is altered in aging and may at least partly explain why older adults experience more pain. Our results highlight the need for a better understanding of the efficacy of non-pharmacological pain treatments in older adults and of how these can be optimized to meet the specific requirements of this population.
Keywords: placebo analgesia, aging, acute pain, TENS, EEG
Procedia PDF Downloads 141
2349 Feminine Gender Identity in Nigerian Music Education: Trends, Challenges and Prospects
Authors: Julius Oluwayomi Oluwadamilare, Michael Olutayo Olatunji
Abstract:
In traditional African societies, women have always played the role of a teacher, albeit informally. This is evident in the upbringing of their babies. As mothers, they serve as the first teachers, teaching their wards lessons through day-to-day activities. Furthermore, women play the role of musicians during naming ceremonies, in the singing of lullabies, during initiation rites of adolescent boys and girls into adulthood, and in preparing their children, especially daughters (and sons), for marriage. They also perform this role during religious and cultural activities, chieftaincy title/coronation ceremonies, the singing of dirges during funeral ceremonies, and so forth. This traditional role puts African/Nigerian women at a vantage point to contribute maximally to the teaching and learning of music at every level of education. The need for more women in the field of music education in Nigeria cannot be overemphasized. Today, gender equality is a major discourse in most countries of the world, Nigeria inclusive. Statistical data in the fields of education and music education reveal a high ratio of male teachers/lecturers to their female counterparts in Nigerian tertiary institutions: the percentage is put at 80% male and a distant 20% female. This paper, therefore, examines feminine gender in Nigerian music education by tracing the involvement of women in musical practice from the pre-colonial to the post-colonial periods. The study employed both primary and secondary sources of data collection. The primary sources included interviews conducted with 19 music lecturers from 8 purposively selected tertiary institutions in 4 geo-political zones of Nigeria. In addition, the observation method was employed in the selected institutions.
The results show, inter alia, that although there is a remarkable improvement in the rate of admission of female students into the music programmes of Nigerian tertiary institutions, there is still an imbalance in job placement in these institutions, especially in the Colleges of Education, which are the main focus of this research. Religious and socio-cultural factors are highly traceable to this development. This paper recommends that more female music teachers be employed in Nigerian tertiary institutions, in line with the provisions stated in the Millennium Development Goals (MDGs) of the Federal Republic of Nigeria.
Keywords: gender, education, music, women
Procedia PDF Downloads 207
2348 Comparing Two Unmanned Aerial Systems in Determining Elevation at the Field Scale
Authors: Brock Buckingham, Zhe Lin, Wenxuan Guo
Abstract:
Accurate elevation data is critical in deriving topographic attributes for the precision management of crop inputs, especially water and nutrients. Traditional ground-based elevation data acquisition is time-consuming, labor-intensive, and often inconvenient at the field scale. Various unmanned aerial systems (UAS) provide the capability of generating digital elevation data from high-resolution images. The objective of this study was to compare the performance of two UAS with different global positioning system (GPS) receivers in determining elevation at the field scale. A DJI Phantom 4 Pro and a DJI Phantom 4 RTK (real-time kinematic) were used to acquire images at three heights: 40 m, 80 m, and 120 m above ground. Forty ground control panels were placed in the field, and their geographic coordinates were determined using an RTK GPS survey unit. For each image acquisition by a UAS at a particular height, two elevation datasets were generated using the Pix4D stitching software: a calibrated dataset using the surveyed coordinates of the ground control panels and an uncalibrated dataset without them. Elevation values for each panel derived from the elevation model of each dataset were compared to the corresponding surveyed coordinates of the ground control panels. The coefficient of determination (R²) and the root mean squared error (RMSE) were used as evaluation metrics to assess the performance of each image acquisition scenario. RMSE values for the uncalibrated elevation datasets were 26.613 m, 31.141 m, and 25.135 m for images acquired at 120 m, 80 m, and 40 m, respectively, using the Phantom 4 Pro UAS. With calibration for the same UAS, the accuracies were significantly improved, with RMSE values of 0.161 m, 0.165 m, and 0.030 m, respectively. The best results showed an RMSE of 0.032 m and an R² of 0.998 for the calibrated dataset generated using the Phantom 4 RTK UAS at 40 m height.
The accuracy of elevation determination decreased as the flight height increased for both UAS, with RMSE values greater than 0.160 m for the datasets acquired at 80 m and 120 m. The results of this study show that calibration with ground control panels improves the accuracy of elevation determination, especially for the UAS with a regular GPS receiver. The Phantom 4 Pro provides accurate elevation data with substantial surveyed ground control panels for the 40 m dataset. The Phantom 4 RTK UAS provides accurate elevation at 40 m without calibration for practical precision agriculture applications. This study provides valuable information on selecting appropriate UAS and flight heights for determining elevation in precision agriculture applications.
Keywords: unmanned aerial system, elevation, precision agriculture, real-time kinematic (RTK)
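The two evaluation metrics named in this abstract are straightforward to compute. A minimal sketch follows, with made-up elevation values standing in for the panel survey and the model-derived elevations; only the metric definitions mirror the study.

```python
import numpy as np

# RMSE and R^2 between model-derived elevations and RTK-surveyed
# ground control panel elevations. The data values are illustrative.
def rmse(pred: np.ndarray, obs: np.ndarray) -> float:
    """Root mean squared error between predictions and observations."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def r_squared(pred: np.ndarray, obs: np.ndarray) -> float:
    """Coefficient of determination of predictions vs. observations."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return float(1.0 - ss_res / ss_tot)

surveyed = np.array([812.10, 812.45, 811.90, 812.80, 812.30])  # m, RTK survey
derived = np.array([812.13, 812.41, 811.93, 812.77, 812.34])   # m, from elevation model

print(rmse(derived, surveyed), r_squared(derived, surveyed))
```

With per-panel errors of a few centimeters, as in the calibrated 40 m datasets, the RMSE lands in the 0.03 m range the abstract reports.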
Procedia PDF Downloads 165
2347 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms
Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson
Abstract:
This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total field magnetometer arrays. Our research has focused on the development of a vertically-integrated suite of platforms, all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system, providing immediate access to data and meta-data for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment.
Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection
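The dipole modeling mentioned in the abstract rests on the standard point-dipole forward model; a minimal sketch of that model (illustrative only, not the authors' processing code) is:

```python
import math

def dipole_field(m, r):
    """Field of a magnetic point dipole: B = mu0/(4*pi) * (3(m.rhat)rhat - m) / |r|^3.
    m is the dipole moment (A*m^2), r the displacement from the dipole (m);
    returns B in tesla. Illustrative sketch, not the authors' processing code."""
    mu0_4pi = 1e-7  # mu0 / (4*pi) in SI units
    rn = math.sqrt(sum(c * c for c in r))
    rh = [c / rn for c in r]  # unit vector along r
    m_dot_rh = sum(mc * rc for mc, rc in zip(m, rh))
    return [mu0_4pi * (3.0 * m_dot_rh * rh[i] - m[i]) / rn ** 3 for i in range(3)]
```

Classification then amounts to fitting the moment m (and position) of a candidate source to the mapped anomaly by least squares, which is why accurate geo-positioning and noise mitigation matter so much.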
Procedia PDF Downloads 465
2346 The Thoughts and Feelings of 60-72 Month Old Children about School and Teacher
Authors: Ayse Ozturk Samur, Gozde Inal Kiziltepe
Abstract:
No matter the level of education, starting school is an exciting process, as it includes new experiences. In this process, the child steps into an environment and institution different from the family into which he or she was born and feels secure. That new environment is different from home; it is a social environment that has its own rules and involves duties and responsibilities to be fulfilled, as well as new vital experiences. Children who have a positive attitude towards school and like school are more enthusiastic and eager to participate in classroom activities. Moreover, a close relationship with the teacher enables the child to have positive emotions and ideas about the teacher and school and helps children adapt to school easily. This study aimed to identify children's perceptions of academic competence, attitudes towards school and ideas about their teachers. In accordance with this aim, a mixed method that includes both qualitative and quantitative data collection was used, the quantitative data being supported with qualitative data. The study group of the research consists of 250 randomly chosen children who are 60-72 months old and attending a preschool institution in a city center located in the West Anatolian region of Turkey. Quantitative data were collected using the Feelings about School (FAS) scale. The scale consists of 12 items and 4 dimensions: school, teacher, mathematics, and literacy. The reliability and validity study for the scale was conducted by the researchers with 318 children who were 60-72 months old. For content validity, experts' opinions were sought; for construct validity, confirmatory factor analysis was utilized. Reliability of the scale was examined by calculating the internal consistency coefficient (Cronbach's alpha).
At the end of the analyses, it was found that the FAS is a valid and reliable instrument to identify 60-72 month old children's perceptions of their academic competency, attitudes toward school and ideas about their teachers. For the qualitative dimension of the study, semi-structured interviews were conducted with 30 children aged 60-72 months. At the end of the study, it was identified that children's perceptions of their academic competencies and attitudes towards school were medium-level and their ideas about their teachers were high. Based on the semi-structured interviews, the children have a positive perception of school and teacher; that is, the quantitatively gathered data are supported by the qualitatively collected data.
Keywords: feelings, preschool education, school, teacher, thoughts
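The internal consistency coefficient used for the reliability study can be computed as follows; this is a generic sketch of Cronbach's alpha, not the authors' exact analysis pipeline:

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
    Generic formula; the authors' exact computation is not shown in the abstract."""
    k = len(scores[0])  # number of items
    item_vars = sum(variance([row[j] for row in scores]) for j in range(k))
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_vars / total_var)
```

Values approaching 1 indicate that the 12 items measure a common construct consistently; for a multidimensional scale such as the FAS, alpha is usually reported per dimension as well as overall.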
Procedia PDF Downloads 225
2345 Application of Neuroscience in Aligning Instructional Design to Student Learning Style
Authors: Jayati Bhattacharjee
Abstract:
Teaching is a very dynamic profession. Teaching science is as much a challenge as learning the subject, if not more so; consider, for instance, the teaching of chemistry. From the introductory concepts of subatomic particles to atoms of elements and their symbols, and further to presenting the chemical equation and so forth, the challenge sits on both sides of the teaching-learning equation. This paper combines the neuroscience of learning and memory with the knowledge of learning styles (VAK) and presents an effective tool for the teacher to authenticate learning. The model of 'working memory' (the visuo-spatial sketchpad, the central executive and the phonological loop that transforms short-term memory into long-term memory) actually supports the psychological theory of learning styles, i.e., visual-auditory-kinesthetic. A closer examination of David Kolb's learning model suggests that learning requires abilities that are polar opposites, and that the learner must continually choose which set of learning abilities he or she will use in a specific learning situation. In grasping experience, some of us perceive new information through experiencing the concrete, tangible, felt qualities of the world, relying on our senses and immersing ourselves in concrete reality. Others tend to perceive, grasp, or take hold of new information through symbolic representation or abstract conceptualization: thinking about, analyzing, or systematically planning, rather than using sensation as a guide. Similarly, in transforming or processing experience, some of us tend to carefully watch others who are involved in the experience and reflect on what happens, while others choose to jump right in and start doing things. The watchers favor reflective observation, while the doers favor active experimentation. Any lesson plan can be based on the model of prescriptive design: C + O = M (C: instructional condition; O: instructional outcome; M: instructional method).
The desired outcome and conditions are independent variables, whereas the instructional method is dependent and hence can be planned and suited to maximize the learning outcome. Assessment for learning, rather than of learning, can encourage and build confidence and hope amongst learners, and goes a long way toward replacing, with a human touch, the anxiety and hopelessness a student experiences while learning science. Application of this model has been tried in teaching chemistry to high school students as well as in workshops with teachers. The responses received have demonstrated the desired results.
Keywords: working memory model, learning style, prescriptive design, assessment for learning
Procedia PDF Downloads 351
2344 Anti-Leishmanial Compounds from the Seaweed Padina pavonica
Authors: Nahal Najafi, Afsaneh Yegdaneh, Sedigheh Saberi
Abstract:
Introduction: Leishmaniasis poses a substantial global risk, affecting millions and resulting in thousands of cases each year in endemic regions. Challenges in current leishmaniasis treatments include drug resistance, high toxicity, and pancreatitis. Marine organisms, particularly brown algae, serve as a valuable source of inspiration for discovering treatments against Leishmania. Material and method: Padina pavonica was collected from the Persian Gulf. The seaweeds were dried and extracted with methanol:ethyl acetate (1:1). The extract was partitioned into hexane (Hex), dichloromethane (DCM), butanol, and water by the Kupchan partitioning method. The Hex partition was fractionated by silica gel column chromatography into 10 fractions (Fr. 1-10). Fr. 6 was further separated by normal-phase HPLC to yield compounds 1-3. The structures of the isolated compounds were elucidated by NMR, mass spectrometry, and other spectroscopic methods. The Hex and DCM partitions, Fr. 6 and compounds 1-3 were tested for leishmanicidal activity. RAW cell lines were cultured in enriched RPMI (10% FBS, 1% pen-strep) at 37°C in a 5% CO2 incubator, while promastigotes were initially cultured in NNN medium and subsequently transferred to the aforementioned medium. Cytotoxicity was assessed using MTT tests, anti-promastigote activity was evaluated by counting promastigotes in a hemocytometer chamber, and amastigote damage was determined by counting amastigotes within 100 macrophages. Results: NMR and mass spectrometry identified the isolated compounds as fucosterol and two sulfoquinovosyldiacylglycerols (SQDG). Among the samples tested, Fr. 6 exhibited the highest cytotoxicity (CC50 = 60.24), while compound 2 showed the lowest cytotoxicity (CC50 = 21984).
Compound 1 and the dichloromethane fraction demonstrated the highest and lowest anti-promastigote activity (IC50 = 115.7 and 16.42, respectively), and compound 1 and the hexane fraction exhibited the highest and lowest anti-amastigote activity (IC50 = 7.874 and 40.18, respectively). Conclusion: All six samples, including the Hex and DCM partitions, Fr. 6, and compounds 1-3, showed activity that rose significantly with concentration and exposure time (P ≤ 0.05). Considering the higher selectivity index of compound 2 compared to the others, it can be inferred that the presence of sulfur groups and unsaturated chains potentially contributes to these effects by inhibiting DNA polymerase, although this needs more research.
Keywords: Padina, leishmania, sulfoquinovosyldiacylglycerol, cytotoxicity
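The selectivity index used to rank the samples is conventionally the ratio of host-cell cytotoxicity to anti-parasitic potency; a minimal sketch (the abstract reports compound 2's CC50 but not the paired IC50, so any concrete pairing below is hypothetical):

```python
def selectivity_index(cc50, ic50):
    """SI = CC50 (host-cell cytotoxicity) / IC50 (anti-leishmanial potency).
    A higher SI means the sample harms the parasite at concentrations well
    below those toxic to host macrophages. Pairing compound 2's CC50 (21984)
    with a specific IC50 would be hypothetical; the abstract does not report it."""
    return cc50 / ic50
```

For example, a sample with CC50 = 100 and IC50 = 10 has SI = 10; candidates are usually ranked by this ratio rather than by potency alone, which is why compound 2 stands out despite not being the most potent.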
Procedia PDF Downloads 22
2343 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap
Authors: Nikolai N. Bogolubov, Andrey V. Soldatov
Abstract:
Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from developed enough to meet the requirements of practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This relative underdevelopment of a potentially very important range of the electromagnetic spectrum is known as the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled and continuously radiating terahertz sources. Therefore, the development of new techniques serving this purpose, as well as various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest suitable physical systems as the major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g. laser) field, can radiate continuously at a much lower (e.g. terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent, non-equal diagonal matrix elements. This contradicts the assumption routinely made in quantum optics that only the non-diagonal matrix elements persist.
The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial-inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible routes to experimental observation and practical implementation of the predicted effect are also discussed.
Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot
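The mechanism can be outlined as follows (a hedged sketch consistent with the abstract, not the authors' derivation). In the energy basis, the dipole operator splits into off-diagonal and diagonal parts:

```latex
\hat{d} \;=\; \underbrace{d_{12}\,(\sigma_{+}+\sigma_{-})}_{\text{off-diagonal}}
\;+\; \underbrace{\tfrac{1}{2}\,(d_{11}+d_{22})\,\mathbb{1}
\;+\; \tfrac{1}{2}\,(d_{11}-d_{22})\,\sigma_{z}}_{\text{diagonal (permanent) part}}
```

Under resonant monochromatic driving, the population difference \(\langle\sigma_{z}\rangle\) oscillates at the Rabi frequency \(\Omega_{R} = d_{12}E_{0}/\hbar\), so a non-zero \(d_{11}-d_{22}\) gives the dipole expectation a component at \(\Omega_{R}\). For realistic laser intensities \(\Omega_{R}\) can fall in the terahertz range, which is the low-frequency emission channel the abstract exploits; when \(d_{11}=d_{22}\), as in inversion-symmetric natural atoms, this channel vanishes.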
Procedia PDF Downloads 271
2342 Assessment of the State of Hygiene in a Tunisian Hospital Kitchen: Interest of Mycological and Parasitological Samples from Food Handlers and Environment
Authors: Bouchekoua Myriam, Aloui Dorsaf, Trabelsi Sonia
Abstract:
Introduction: Food hygiene in hospitals is important, particularly among patients who may be more vulnerable than healthy subjects to microbiological and nutritional risks. The consumption of contaminated food may be responsible for foodborne diseases, which can be severe among hospitalized patients, especially the immunocompromised. The aim of our study was to assess the state of hygiene in the internal catering department of a Tunisian hospital. Methodology and major results: A prospective study was conducted for one year in the Parasitology-Mycology laboratory of Charles Nicolle Hospital. Samples were taken from the kitchen staff, worktops, and cooking utensils used in the internal catering department. Thirty-one employees underwent stool examinations and scotch-tape tests in order to evaluate the degree of parasitic infestation. 35% of stool exams were positive. Protozoa were the only parasites detected. Blastocystis sp was the species most often found, in nine food handlers; its role as a human pathogen is still controversial. Pathogenic protozoa were detected in two food handlers (Giardia intestinalis in one person and Dientamoeba fragilis in the other). Non-pathogenic protozoa were found in two cases; among them, only one had digestive symptoms, without a statistically significant association with the carriage of intestinal parasites. Moreover, samples were taken from the hands of the staff in order to search for fungal carriage. Thus, 25 employees (81%) were colonized by fungi, including molds. Besides, mycological examination among food handlers with suspected dermatomycosis, performed for diagnostic confirmation, concluded foot onychomycosis in 32% of cases and interdigital intertrigo in 26%. Only one person had hand onychomycosis. Among the 17 samples taken from worktops and kitchen utensils, fungal contamination was detected in 13 sites. Hot and cold equipment were the most contaminated. The molds identified belonged mainly to five different genera.
Cladosporium sp was predominant. Conclusion: In view of the prevalence of intestinal parasites among food handlers, the intensity of fungal hand carriage among these employees, and the high level of fungal contamination of worktops and kitchen utensils, a reinforcement of hygiene measures is more than essential in order to minimize the risk of alimentary contamination.
Keywords: hospital kitchen, environment, intestinal parasitosis, fungal carriage, fungal contamination
Procedia PDF Downloads 117
2341 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs; some approaches might estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first, we use sampling-based uncertainty propagation with first-order error analysis; in the other, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable, with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter might be aleatory, but sufficient data might not be available to adequately model it as a single random variable; for example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals.
This results in a distributional p-box: the physical parameter has an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainty, each being an unknown element of a known interval. This uncertainty is reducible. From the study, it is observed that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive. That is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary; this is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
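A distributional p-box of this kind is commonly propagated by double-loop (nested) sampling: an outer loop samples the epistemic interval parameters, and an inner loop samples the aleatory variable. A minimal sketch, with a hypothetical response function standing in for the challenge's black-box model:

```python
import random

def response(x):
    # hypothetical stand-in for the black-box challenge model (assumption)
    return x * x

def pbox_bounds(mean_iv, sd_iv, n_outer=200, n_inner=500, q=0.95, seed=1):
    """Bounds on the q-th percentile of the response under a distributional
    p-box: the input is normal, but its mean and standard deviation are only
    known to lie in the intervals mean_iv and sd_iv (epistemic uncertainty).
    Outer loop: sample the epistemic parameters; inner loop: sample the
    aleatory variable and propagate it through the model."""
    rng = random.Random(seed)
    lo, hi = float("inf"), float("-inf")
    for _ in range(n_outer):
        mu = rng.uniform(*mean_iv)
        sd = rng.uniform(*sd_iv)
        ys = sorted(response(rng.gauss(mu, sd)) for _ in range(n_inner))
        pq = ys[int(q * n_inner)]  # empirical q-th percentile for this (mu, sd)
        lo, hi = min(lo, pq), max(hi, pq)
    return lo, hi
```

Because the outer loop can only ever visit finitely many (mean, std) pairs, the returned bounds tend to be inner (under-)estimates of the true percentile bounds, which is exactly the underestimation problem the abstract raises and PBO is meant to address.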
Procedia PDF Downloads 240
2340 Performance Evaluation of the CSAN Pronto Point-of-Care Whole Blood Analyzer for Regular Hematological Monitoring During Clozapine Treatment
Authors: Farzana Esmailkassam, Usakorn Kunanuvat, Zahraa Mohammed Ali
Abstract:
Objective: A key barrier in clozapine treatment of treatment-resistant schizophrenia (TRS) is the frequent blood draws needed to monitor neutropenia, the main drug side effect; WBC and ANC monitoring must occur throughout treatment. Accurate WBC and ANC counts are necessary for clinical decisions to halt, modify or continue clozapine treatment. The CSAN Pronto point-of-care (POC) analyzer generates white blood cell (WBC) and absolute neutrophil (ANC) counts through image analysis of capillary blood, and POC monitoring offers significant advantages over central laboratory testing. This study evaluated the performance of the CSAN Pronto against the Beckman DxH900 hematology laboratory analyzer. Methods: Forty venous samples (EDTA whole blood) with varying concentrations of WBC and ANC, as established on the DxH900 analyzer, were tested in duplicate on three CSAN Pronto analyzers. Additionally, venous and capillary samples were concomitantly collected from 20 volunteers and assessed on the CSAN Pronto and the DxH900. Analytical performance was also evaluated, including precision, using liquid quality controls (QCs) and patient samples near the medical decision points, and linearity, using mixes of high and low patient samples to create five concentrations. Results: In the precision study for QCs and whole blood, WBC and ANC showed CVs inside the limits established according to manufacturer and laboratory acceptability standards. WBC and ANC were found to be linear across the measurement range with a correlation of 0.99. WBC and ANC from all analyzers correlated well with the DxH900 in venous samples across the tested ranges, with a correlation of > 0.95. The mean bias in ANC obtained on the CSAN Pronto versus the DxH900 was 0.07 × 10⁹ cells/L (95% LoA: -0.25 to 0.49) for concentrations < 4.0 × 10⁹ cells/L, a range that includes the decision-making cut-offs for continuing clozapine treatment.
The mean bias in WBC obtained on the CSAN Pronto versus the DxH900 was 0.34 × 10⁹ cells/L (95% LoA: -0.13 to 0.72) for concentrations < 5.0 × 10⁹ cells/L. The mean bias was higher (-11% for ANC, 5% for WBC) at higher concentrations. The correlation between capillary and venous samples showed more variability, with a mean bias of 0.20 × 10⁹ cells/L for ANC. Conclusions: The CSAN Pronto showed acceptable performance in WBC and ANC measurements from venous and capillary samples and was approved for clinical use. This testing will facilitate treatment decisions and improve clozapine uptake and compliance.
Keywords: absolute neutrophil counts, clozapine, point of care, white blood cells
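The mean bias and 95% limits of agreement quoted above are the standard Bland-Altman statistics for paired method-comparison data; a minimal sketch (illustrative, not the laboratory's validated computation):

```python
from statistics import mean, stdev

def bland_altman(poc, lab):
    """Mean bias and 95% limits of agreement (bias +/- 1.96 SD of the paired
    differences) between a point-of-care and a laboratory method.
    Illustrative sketch, not the laboratory's validated computation."""
    diffs = [p - l for p, l in zip(poc, lab)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Acceptability is then judged by whether the limits of agreement stay inside clinically tolerable error at the decision thresholds (e.g. the ANC cut-offs for continuing clozapine), not by correlation alone.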
Procedia PDF Downloads 95
2339 Antigen Stasis can Predispose Primary Ciliary Dyskinesia (PCD) Patients to Asthma
Authors: Nadzeya Marozkina, Joe Zein, Benjamin Gaston
Abstract:
Introduction: We have observed that many patients with primary ciliary dyskinesia (PCD) benefit from asthma medications. In healthy airways, ciliary function is normal: antigens and irritants are rapidly cleared, and NO enters the gas phase normally to be exhaled. In PCD airways, however, antigens such as Dermatophagoides are not as well cleared. This defect leads to oxidative stress, marked by increased DUOX1 expression and decreased superoxide dismutase (SOD) activity (manuscript under revision). H2O2, at high concentrations in the PCD airway, injures the airway; NO is oxidized rather than exhaled, forming cytotoxic peroxynitrous acid. Thus, antigen stasis on the PCD airway epithelium leads to airway injury and may predispose PCD patients to asthma. Indeed, recent population genetics suggest that PCD genes may be associated with asthma. We therefore hypothesized that PCD patients would be predisposed to having asthma. Methods: We analyzed our database of 18 million individual electronic medical records (EMRs) in the Indiana Network for Patient Care research database (INPCR). There is no ICD-10 code for PCD itself; code Q34.8 is most commonly used clinically. To validate analysis of this code, we queried patients who had ICD-10 codes for both bronchiectasis and situs inversus totalis in the INPCR. We also studied a validation cohort using the IBM Explorys database (over 80 million individuals). Analyses were adjusted for age, sex and race using a 1 PCD : 3 controls matching method in the INPCR and multivariable logistic regression in the IBM Explorys database. Results: The prevalence of asthma ICD-10 codes in subjects with code Q34.8 was 67% vs 19% in controls (P < 0.0001) (Regenstrief Institute). Similarly, in IBM Explorys, the OR [95% CI] for having asthma in a patient who also had ICD-10 code Q34.8, relative to controls, was 4.04 [3.99; 4.09].
For situs inversus alone, the OR [95% CI] was 4.42 [4.14; 4.71]; for bronchiectasis alone, the OR [95% CI] was 10.68 [10.56; 10.79]; and for bronchiectasis and situs inversus together, the OR [95% CI] was 28.80 [23.17; 35.81]. Conclusions: PCD causes antigen stasis in the human airway (under review), likely predisposing to asthma in addition to oxidative and nitrosative stress and airway injury. Here we show, by several different population-based metrics and using two large databases, that patients with PCD appear to have between a three- and 28-fold increased risk of having asthma. These data suggest that additional studies should be undertaken to understand the role of ciliary dysfunction in the pathogenesis and genetics of asthma. Decreased antigen clearance caused by ciliary dysfunction may be a risk factor for asthma development.
Keywords: antigen, PCD, asthma, nitric oxide
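Odds ratios and confidence intervals of the kind reported above can be computed from 2x2 counts with the usual Woolf (log-OR) method; a minimal sketch with placeholder counts, since the abstract reports only the derived ORs:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls, c = unexposed cases,
    d = unexposed controls. Counts here are placeholders; the abstract
    reports only the derived ORs, not the underlying cell counts."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

The narrow intervals in the abstract (e.g. 4.04 [3.99; 4.09]) reflect the very large cell counts available in an 80-million-record database, since the CI width shrinks with the reciprocals of the counts.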
Procedia PDF Downloads 107
2338 Using Photogrammetric Techniques to Map the Mars Surface
Authors: Ahmed Elaksher, Islam Omar
Abstract:
For many years, the Mars surface has been a mystery for scientists. Recently, with the help of geospatial data and photogrammetric procedures, researchers have been able to gain some insight into this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images to generate a more accurate and trustworthy surface of Mars. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets and employed in co-registering the datasets using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: ground control points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters.
Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters; further increasing the number of GCPs did not improve the results significantly. Using the 3D-to-2D transformation models provided two- to three-meter accuracy. The best results were reported using the DLT transformation model; however, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model, as it provides the required accuracy for ASPRS large-scale mapping standards, although well-distributed sets of GCPs are key to achieving such accuracy. The model is simple to apply and does not need substantial computation.
Keywords: Mars, photogrammetry, MOLA, HiRISE
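Of the three models, the parallel-projection (3D affine) model is the simplest to sketch: each image coordinate is a linear function of the ground coordinates, fitted to the GCPs by least squares. A minimal stdlib-only illustration (not the authors' implementation; the DLT adds a projective denominator to this same pattern):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_affine_3d_to_2d(gcps):
    """Least-squares fit of the parallel-projection (3D affine) model
    x = a1*X + a2*Y + a3*Z + a4 (and likewise for y) from GCPs given as
    ((X, Y, Z), (x, y)) pairs; needs at least four non-coplanar points."""
    rows = [[X, Y, Z, 1.0] for (X, Y, Z), _ in gcps]
    def lsq(obs):  # solve the 4x4 normal equations (A^T A) p = A^T b
        AtA = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
        Atb = [sum(r[i] * o for r, o in zip(rows, obs)) for i in range(4)]
        return solve(AtA, Atb)
    return lsq([p[0] for _, p in gcps]), lsq([p[1] for _, p in gcps])
```

Check-point RMSEs are then obtained by applying the fitted parameters to the ChkP ground coordinates and comparing against their digitized image coordinates, which is the evaluation procedure described above.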
Procedia PDF Downloads 57
2337 In vitro Evaluation of Capsaicin Patches for Transdermal Drug Delivery
Authors: Alija Uzunovic, Sasa Pilipovic, Aida Sapcanin, Zahida Ademovic, Berina Pilipović
Abstract:
Capsaicin is a naturally occurring alkaloid extracted from the fruit of different Capsicum species. It has been employed topically to treat many conditions such as rheumatoid arthritis, osteoarthritis, cancer pain and nerve pain in diabetes. The high degree of pre-systemic metabolism of intragastric capsaicin and the short half-life of capsaicin administered intravenously make topical application of capsaicin advantageous. In this study, we evaluated differences in the dissolution characteristics of an 11 mg capsaicin patch (purchased from the market) at different dissolution rotation speeds. The patch area is 308 cm2 (22 cm x 14 cm; it contains 36 µg of capsaicin per square centimeter of adhesive). USP Apparatus 5 (Paddle over Disc) is used for transdermal patch testing. The dissolution study was conducted (n=6) on an ERWEKA DT800 dissolution tester (paddle type) with the addition of a disc. A 9 cm2 piece cut from the 308 cm2 patch was placed against a disc (delivery side up), retained with the stainless-steel screen, and exposed to 500 mL of phosphate buffer solution, pH 7.4. All dissolution studies were carried out at 32 ± 0.5 °C and different rotation speeds (50 ± 5, 100 ± 5 and 150 ± 5 rpm). 5 mL aliquots were withdrawn at various time intervals (1, 4, 8 and 12 hours) and replaced with 5 mL of dissolution medium. Withdrawn samples were appropriately diluted and analyzed by reversed-phase liquid chromatography (RP-LC). An RP-LC method was developed, optimized and validated for the separation and quantitation of capsaicin in a transdermal patch. The method uses a ProntoSIL 120-3-C18 AQ 125 x 4.0 mm (3 μm) column maintained at 60 °C, a mobile phase of acetonitrile:water (50:50 v/v), a flow rate of 0.9 mL/min, an injection volume of 10 μL and a detection wavelength of 222 nm.
The RP-LC method used is simple, sensitive and accurate and can be applied for fast (total chromatographic run time of 4.0 minutes) and simultaneous analysis of capsaicin and dihydrocapsaicin in a transdermal patch. According to the results obtained in this study, the relative difference in the dissolution rate of capsaicin after 12 hours grew as the rotation speed increased (100 rpm vs 50 rpm: 84.9 ± 11.3%; 150 rpm vs 100 rpm: 39.8 ± 8.3%). Although several apparatus and procedures (USP Apparatus 5, 6, 7 and a paddle-over-extraction-cell method) have been used to study the in vitro release characteristics of transdermal patches, USP Apparatus 5 (Paddle over Disc) could be considered a discriminatory test, able to point out the differences in the dissolution rate of capsaicin at different rotation speeds.
Keywords: capsaicin, in vitro, patch, RP-LC, transdermal
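Because each 5 mL aliquot withdrawn is replaced with fresh medium, cumulative-release figures are usually corrected for the drug mass removed at earlier sampling points. A minimal sketch of that standard volume-replacement correction (assumed here; the abstract does not state its exact calculation):

```python
def cumulative_release(measured, v_vessel=500.0, v_aliquot=5.0):
    """Correct measured concentrations for the drug removed with each
    aliquot that is replaced by fresh medium. Standard volume-replacement
    correction, assumed here; the abstract does not state its calculation.
    C_corr(n) = C(n) + (1 / v_vessel) * sum over i < n of C(i) * v_aliquot."""
    corrected, removed = [], 0.0
    for c in measured:
        corrected.append(c + removed / v_vessel)
        removed += c * v_aliquot  # drug mass carried out with this aliquot
    return corrected
```

With a 500 mL vessel and 5 mL aliquots, the per-sample correction is only 1% of each earlier concentration, but it accumulates over the 1, 4, 8 and 12 hour sampling points and matters when comparing release profiles across rotation speeds.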
Procedia PDF Downloads 228
2336 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resource management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savanna, cropland, settlement, and water bodies. GEE was chosen for its high-performance computing capabilities, which mitigate the computational burdens associated with traditional land cover classification methods; by eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features such as slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy, emphasizing the synergy of different input sources in achieving it.
As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
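The overall accuracy and kappa figures quoted above follow directly from a classification confusion matrix; a minimal sketch of the generic accuracy-assessment formulas (not the Google Earth Engine implementation):

```python
def accuracy_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes). Generic accuracy-
    assessment formulas; not the Google Earth Engine implementation."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(k)) / n  # observed agreement (OA)
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm) for i in range(k)) / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)
```

Kappa discounts the agreement expected by chance given the class proportions, which is why an OA of 91% can correspond to a kappa of 0.88 rather than 0.91.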
Procedia PDF Downloads 63
2335 Diversity, Biochemical and Genomic Assessment of Selected Benthic Species of Two Tropical Lagoons, Southwest Nigeria
Authors: G. F. Okunade, M. O. Lawal, R. E. Uwadiae, D. Portnoy
Abstract:
The diversity, physico-chemical, biochemical and genomic assessments of macrofauna species of the Ologe and Badagry Lagoons were carried out between August 2016 and July 2018. The concentrations of Fe, Zn, Mn, Cd, Cr, and Pb in water were determined by atomic absorption spectrophotometry (AAS). Particle size distribution was determined by wet sieving and sedimentation using the hydrometer method. Genomic analyses were carried out using 25 P. fusca (quadriseriata) and 25 P. fusca from each lagoon, owing to their abundance in both lagoons throughout the two years of collection. DNA was isolated from each sample using the Mag-Bind Blood and Tissue DNA HD 96 kit, a method designed to isolate high-quality DNA. The biochemical characteristics were analysed in the dominant species (P. aurita and T. fuscatus) using ELISA kits. Physico-chemical parameters such as pH, total dissolved solids (TDS), dissolved oxygen, and conductivity were analysed using APHA standard protocols. Water quality was recorded with mean values of 32.46 ± 0.66 mg/L and 41.93 ± 0.65 mg/L for COD, 27.28 ± 0.97 and 34.82 ± 0.1 mg/L for BOD, 0.04 ± 4.71 mg/L for DO, and pH of 6.65 and 6.58 in Ologe and Badagry Lagoons, with significant variations (p ≤ 0.05) across seasons. The mean salinity for Ologe and Badagry Lagoons ranged from 0.43 ± 0.30 to 0.27 ± 0.09. A total of 4210 individuals were collected: a phylum, two classes, four families and 2008 individuals in Ologe Lagoon, and a phylum, two classes, five families and 2202 individuals in Badagry Lagoon. The class composition at Ologe Lagoon was 99% gastropods and 1% bivalves, while gastropods contributed 98.91% and bivalves 1.09% in Badagry Lagoon.
Particle sizes ranged from 0.002 mm to 2.00 mm. The particle size distribution in Ologe Lagoon comprised 0.83% gravel, 97.83% sand, and 1.33% silt, while Badagry Lagoon recorded 7.43% sand, 24.71% silt, and 67.86% clay, consistent with the excessive dredging activities ongoing in the lagoon. The maximum sand percentage (100%) was recorded at station 6 in Ologe Lagoon, while the minimum (96%) was found at station 1. P. aurita (Ologe Lagoon) and T. fuscatus (Badagry Lagoon) were the most abundant benthic species, contributing 61.05% and 64.35%, respectively. P. aurita in Ologe Lagoon showed mean enzymatic activities of 21.03 mg/dl for AST, 10.33 mg/dl for ALP, 82.16 mg/dl for ALT, and 73.06 mg/dl for CHO, while T. fuscatus in Badagry Lagoon recorded mean values of 29.76 mg/dl for AST, 11.69 mg/dl for ALP, 140.58 mg/dl for ALT, and 45.98 mg/dl for CHO. There were significant variations (p < 0.05) in AST and CHO activity levels in the muscles of the species.
Keywords: benthos, biochemical responses, genomics, metals, particle size
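The gravel/sand/silt/clay fractions above follow a standard sediment size classification. A minimal sketch, assuming conventional Wentworth-scale boundaries (the exact cut-offs used in the study are not stated in the abstract and are an assumption here):

```python
# Hypothetical sketch of sediment size classification; boundaries follow
# the standard Wentworth scale, not values confirmed by the abstract.
def classify_particle(diameter_mm: float) -> str:
    """Classify a sediment particle by diameter in millimetres."""
    if diameter_mm > 2.0:
        return "gravel"
    if diameter_mm > 0.063:    # sand: 0.063-2.0 mm
        return "sand"
    if diameter_mm > 0.002:    # silt: 0.002-0.063 mm
        return "silt"
    return "clay"              # clay: < 0.002 mm

def composition(diameters_mm):
    """Percentage composition by particle count."""
    counts = {}
    for d in diameters_mm:
        cls = classify_particle(d)
        counts[cls] = counts.get(cls, 0) + 1
    total = len(diameters_mm)
    return {cls: 100.0 * n / total for cls, n in counts.items()}
```

Applied to a sieved sample, `composition` yields the kind of percentage breakdown reported for each lagoon.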
Procedia PDF Downloads 126
2334 The Response of Mammal Populations to Abrupt Changes in Fire Regimes in Montane Landscapes of South-Eastern Australia
Authors: Jeremy Johnson, Craig Nitschke, Luke Kelly
Abstract:
Fire regimes, climate, and topographic gradients interact to influence ecosystem structure and function across fire-prone, montane landscapes worldwide. Biota have developed a range of adaptations to historic fire regime thresholds, which allow them to persist in these environments. In south-eastern Australia, a signal of fire regime change is emerging across these landscapes, and anthropogenic climate change is likely one of the main drivers of the increase in burnt area and more frequent wildfire over the last 25 years. This shift has the potential to modify vegetation structure and composition at broad scales, which may produce landscape patterns to which biota are not adapted, increasing the likelihood of local extirpation of some mammal species. This study addressed concerns about the influence of abrupt changes in fire regimes on mammal populations in montane landscapes. It first examined the impact of climate, topography, and vegetation on fire patterns and then explored the consequences of these changes for mammal populations and their habitats. Field studies were undertaken across diverse vegetation, fire severity, and fire frequency gradients, utilising camera trapping, passive acoustic monitoring, and the collection of fine-scale vegetation data. Results show that drought is a primary contributor to fire regime shifts at the landscape scale, while topographic factors have a variable influence on wildfire occurrence at finer scales. Frequent, high-severity wildfire influenced forest structure and composition at broad spatial scales and, at fine scales, reduced the occurrence of hollow-bearing trees and promoted coarse woody debris. Mammals responded differently to shifts in forest structure and composition depending on their habitat requirements. This study highlights the complex interplay between fire regimes, environmental gradients, and biotic adaptations across temporal and spatial scales. 
It emphasizes the importance of understanding these complex interactions to effectively manage fire-prone ecosystems in the face of climate change.
Keywords: fire, ecology, biodiversity, landscape ecology
Procedia PDF Downloads 73
2333 Association between Cholesterol Levels and Atopy among Adolescents with and without Sufficient Amount of Physical Activity
Authors: Keith T. S. Tung, H. W. Tsang, Rosa S. Wong, Frederick K. Ho, Patrick Ip
Abstract:
Objectives: Atopic diseases are increasingly prevalent among children and adolescents, both locally and internationally. One possible contributing factor is hypercholesterolemia, which leads to cholesterol accumulation in macrophages and other immune cells and eventually promotes inflammatory responses, including augmentation of toll-like receptor (TLR) signalling. Meanwhile, physical activity is well known for its beneficial effects against hypercholesterolemia and the incidence of atopic diseases. This study therefore explored whether atopic diseases were associated with increased cholesterol levels and whether physical activity habits influenced this association. Methods: This is a sub-study of a longitudinal cohort study that recruited a group of children at five years of age in Kindergarten 3 (K3) to investigate the long-term impact of family socioeconomic status on child development. In 2018/19, the adolescents (average age: 13 years) were asked to report their physical activity habits and history of any atopic diseases. During health assessment, peripheral blood samples were collected from the adolescents to study their lipid profile [total cholesterol, high-density lipoprotein (HDL)-cholesterol, and low-density lipoprotein (LDL)-cholesterol]. Regression analyses were performed to test the relationships between variables of interest. Results: Among the 315 adolescents, 99 (31.4%) reported having allergic rhinitis, 45 (14.3%) eczema, 17 (5.4%) food allergy, and 12 (3.8%) asthma. Regression analyses showed that adolescents with a history of any type of atopic disease had significantly higher total cholesterol (B=13.3, p < 0.01) and LDL cholesterol (B=7.9, p < 0.05) levels. Further subgroup analyses were conducted to examine the effect of physical activity level on the association between atopic diseases and cholesterol levels. 
We found stronger associations among those who did not meet the World Health Organization recommendation of at least 60 minutes of moderate-to-vigorous activity per day (total cholesterol: B=15.5, p < 0.01; LDL cholesterol: B=10.4, p < 0.05). For those who met this recommendation, the associations between atopic diseases and cholesterol levels became non-significant. Conclusion: Our results support the current research evidence on the relationship between elevated cholesterol levels and atopic diseases. More importantly, they provide preliminary support for a protective effect of regular exercise against the elevation of cholesterol levels associated with atopic diseases. The findings highlight the importance of a healthy lifestyle for keeping cholesterol levels in the normal range, which benefits both physical and mental health.
Keywords: atopic diseases, Chinese adolescents, cholesterol level, physical activity
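The B coefficients reported above come from regressions with a binary predictor (atopic history: yes/no); in the unadjusted case, such a coefficient is simply the difference in group means. A minimal sketch with illustrative numbers (not the study data):

```python
# Minimal sketch of the unadjusted subgroup regression. With a 0/1
# predictor, the OLS coefficient B equals the difference in group means.
# Cholesterol values below are illustrative, not the study's data.
def ols_slope_binary(outcome, predictor):
    """Unadjusted regression coefficient of `outcome` on a 0/1 `predictor`."""
    group1 = [y for y, x in zip(outcome, predictor) if x == 1]
    group0 = [y for y, x in zip(outcome, predictor) if x == 0]
    return sum(group1) / len(group1) - sum(group0) / len(group0)

chol  = [180.0, 190.0, 200.0, 170.0, 175.0, 165.0]   # total cholesterol
atopy = [1, 1, 1, 0, 0, 0]                           # atopic history (0/1)
b = ols_slope_binary(chol, atopy)                    # mean difference = B
```

Subgroup analysis then amounts to computing B separately within the active and inactive strata, as the abstract describes.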
Procedia PDF Downloads 120
2332 Human Identification Using Local Roughness Patterns in Heartbeat Signal
Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori
Abstract:
Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential for human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, ensuring that a legitimate, living individual is identified. However, the detection accuracy of current ECG-based methods is insufficient due to the high variability of an individual's heartbeats at different instances of time. These variations may occur due to muscle flexure, changes in mental or emotional state, and changes in sensor position or long-term baseline shift during ECG recording. In this study, a new method is proposed for human identification based on extracting the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant, a pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to obtain the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of individual subjects in the database. One advantage of the proposed feature is that, unlike conventional methods, it does not depend on the accuracy of QRS complex detection. 
Supervised recognition methods are then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects in the PTB database of the National Metrology Institute of Germany showed that the proposed method is promising compared to a conventional interval- and amplitude-feature-based method.
Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification
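The descriptor described above, local binary patterns over a moving window with binary weights accumulated into a histogram, can be sketched as follows. The window radius and the neighbour ordering are assumptions; the abstract does not specify them:

```python
# Sketch of the local roughness (LBP-style) descriptor for a 1-D signal.
# Window radius and bit ordering are assumptions, not values from the paper.
def local_roughness_histogram(signal, radius=2):
    """Histogram of local binary patterns over a 1-D signal.

    At each sample, neighbours within `radius` on both sides are compared
    with the centre value; each comparison contributes one bit, and the
    binary-weighted bits form a pattern code. The histogram of codes gives
    the local roughness description of the signal.
    """
    n_bits = 2 * radius
    hist = [0] * (2 ** n_bits)
    for i in range(radius, len(signal) - radius):
        centre = signal[i]
        code, bit = 0, 0
        for off in list(range(-radius, 0)) + list(range(1, radius + 1)):
            if signal[i + off] >= centre:
                code |= 1 << bit       # binary weight for this neighbour
            bit += 1
        hist[code] += 1
    return hist
```

As the abstract notes, this feature needs no QRS detection: the histogram is computed directly over the filtered signal, and subjects are compared by histogram distance.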
Procedia PDF Downloads 404
2331 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices
Authors: Alena Kulikova, Tatjana Kanonire
Abstract:
Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], an acute and urgent problem. It has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is a necessary condition for psychological, medical, and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality measurement tools. It is therefore not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, a critical review by [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-based testing was the only diagnostic procedure available [6]; it has significant limitations that affect the reliability of the data obtained [7] and increase time costs. Another problem with measuring IQ is that classical linear-structured tests do not fully allow measurement of a respondent's intellectual progress [8], which is a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools. 
However, as in any rapidly developing field, psychometrics does not currently offer ready-made, straightforward solutions and requires additional research. In our presentation, we discuss the strengths and weaknesses of current approaches to intelligence measurement and highlight "points of growth" for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses the achievements of modern psychometrics while remaining valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design for creating and validating an original Russian comprehensive computer test measuring intellectual development in school-age children will be presented.
Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing
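The adaptive alternative to the linear structure criticized above works by matching item difficulty to the respondent's running ability estimate. A toy sketch of this idea, using nearest-difficulty item selection and a fixed-step (staircase) ability update; real CATs use IRT-based maximum-likelihood or Bayesian estimation, so this is a deliberate simplification:

```python
# Toy sketch of computerized adaptive testing (CAT): pick the unused item
# whose difficulty is closest to the current ability estimate, then nudge
# the estimate up/down by a fixed step after each response. The staircase
# update is a simplification of IRT-based estimation.
def run_cat(item_difficulties, answer, n_items=5, step=0.5):
    """Administer `n_items` items adaptively; `answer(difficulty)` -> bool."""
    theta = 0.0                       # initial ability estimate
    remaining = list(item_difficulties)
    for _ in range(n_items):
        # select the unused item closest to the current estimate
        item = min(remaining, key=lambda b: abs(b - theta))
        remaining.remove(item)
        theta += step if answer(item) else -step
    return theta

# A respondent who answers every item easier than difficulty 1.0 correctly:
items = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
theta = run_cat(items, answer=lambda b: b < 1.0)
```

Because each respondent sees only items near their own level, an adaptive test reaches a comparable precision with fewer items than the linear format, which is the time-cost argument made above.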
Procedia PDF Downloads 80
2330 Investigation and Monitoring Method of Vector Density in Kaohsiung City
Authors: Chiu-Wen Chang, I-Yun Chang, Wei-Ting Chen, Hui-Ping Ho, Chao-Ying Pan, Joh-Jong Huang
Abstract:
Dengue is a ‘community disease’ or ‘environmental disease’: as long as the environment contains suitable containers (natural or artificial) for mosquito breeding, an invasion of the virus can lead to a dengue epidemic. Surveillance of vector density is critical to effective infectious disease control and plays an important role in monitoring the dynamics of mosquitoes in the community, such as mosquito species, density, and distribution area. The objective of this study was to examine the relationships among vector density survey indices (Breteau index, adult index, house index, container index, and larvae index) from 2014 to 2016 in Kaohsiung City and to evaluate the effects of introducing the Breeding Elimination and Appraisal Team (hereinafter referred to as BEAT) as an intervention measure for eliminating dengue vector breeding sites, started in May 2016. When a person was suspected of contracting dengue fever, a surrounding area measuring 50 meters by 50 meters was demarcated as the emergency prevention and treatment zone. BEAT performed weekly vector mosquito inspections, prioritizing regions with a high Gravitrap index, and assigned a risk assessment index to each region. These indices, as well as the prevention and treatment results, were reported immediately to epidemic prevention-related units every week. The results indicated that vector indices from 2014 to 2016 showed no statistically significant differences in the Breteau index, adult index, and house index (p > 0.05) but statistically significant differences in the container index and larvae index (p < 0.05). After the integrated elimination work was executed, the container index and larvae index differed significantly across 2014 to 2016 (p < 0.05). 
A post hoc test indicated that the container index of 2014 (M = 12.793) was significantly higher than that of 2016 (M = 7.631), and that the larvae index of 2015 (M = 34.065) was significantly lower than that of 2014 (M = 66.867). The results revealed that effective vector density surveillance could highlight focal breeding sites and trigger immediate control action (BEAT), which successfully decreased the vector density and the risk of a dengue epidemic.
Keywords: Breteau index, dengue control, monitoring method, vector density
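The house, container, and Breteau indices compared above follow the conventional WHO larval-survey definitions. A minimal sketch of those formulas (the survey numbers used in the comments are illustrative only, not the study's data):

```python
# Sketch of the standard WHO larval indices named in the abstract.
# Formulas are the conventional definitions; inputs are illustrative.
def house_index(positive_houses, houses_inspected):
    """Percentage of inspected houses with at least one positive container."""
    return 100.0 * positive_houses / houses_inspected

def container_index(positive_containers, containers_inspected):
    """Percentage of water-holding containers infested with larvae/pupae."""
    return 100.0 * positive_containers / containers_inspected

def breteau_index(positive_containers, houses_inspected):
    """Number of positive containers per 100 houses inspected."""
    return 100.0 * positive_containers / houses_inspected
```

Note that the Breteau index divides containers by houses, so unlike the other two it can exceed 100 and captures multiple breeding sites per house.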
Procedia PDF Downloads 198
2329 Use of Shipping Containers as Office Buildings in Brazil: Thermal and Energy Performance for Different Constructive Options and Climate Zones
Authors: Lucas Caldas, Pablo Paulse, Karla Hora
Abstract:
Shipping containers are present in different Brazilian cities, first used for transportation purposes but becoming waste materials and an environmental burden at the end of their life cycle. In the last decade, buildings made partly or totally from shipping containers have started to appear in Brazil, most of them for commercial and office uses. Although reusing a container for buildings seems a sustainable solution, it is very important to assess the thermal and energy aspects of such use. In this context, this study evaluates the thermal and energy performance of an office building made entirely from a 12-meter-long High Cube 40' shipping container in different Brazilian bioclimatic zones. Four constructive solutions commonly used in Brazil were chosen: (1) container without any covering; (2) with internally insulated drywall; (3) with external fiber cement boards; (4) with both drywall and fiber cement boards. DesignBuilder with EnergyPlus was used for computational simulation over 8760 hours. EnergyPlus Weather File (EPW) data for six Brazilian capital cities were considered: Curitiba, Sao Paulo, Brasilia, Campo Grande, Teresina, and Rio de Janeiro. A split-type air conditioner was adopted for the conditioned area, with the cooling setpoint fixed at 25°C and a coefficient of performance (CoP) of 3.3. Three exterior-layer solar absorptances were tested: 0.3, 0.6, and 0.9. The building in Teresina presented the highest energy consumption, while the one in Curitiba presented the lowest, with a wide range of differences in results. The constructive option of external fiber cement boards plus drywall presented the best results, although the differences were not significant compared to the solution using drywall alone. 
The choice of absorptance had a great impact on energy consumption, mainly for containers without any covering and for use in the hottest cities: Teresina, Rio de Janeiro, and Campo Grande. The main contribution of this study is a discussion of constructive aspects for design guidelines for more energy-efficient container buildings, considering local climate differences; it also helps disseminate this cleaner constructive practice in the Brazilian building sector.
Keywords: bioclimatic zones, Brazil, shipping containers, thermal and energy performance
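The CoP of 3.3 fixed in the simulation setup above translates cooling load into electricity use: the split unit draws one unit of electricity for every 3.3 units of heat removed. A minimal sketch of that arithmetic, with an assumed (illustrative) annual load, not a simulation result:

```python
# Sketch of the cooling-electricity arithmetic implied by the simulation
# setup: electricity = cooling load / CoP. The annual load figure is an
# assumption for illustration, not an output of the study.
def cooling_electricity_kwh(cooling_load_kwh, cop=3.3):
    """Electricity (kWh) needed to remove a given cooling load (kWh)."""
    return cooling_load_kwh / cop

annual_load = 3300.0                                 # kWh of heat to remove (assumed)
electricity = cooling_electricity_kwh(annual_load)   # ~1000 kWh of electricity
```

This is why both the envelope (insulation layers) and the exterior absorptance matter: each reduces the load term directly, and the CoP then scales that saving into electricity.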
Procedia PDF Downloads 174