Search results for: joint amplitude measurement

238 Applying the Global Trigger Tool in German Hospitals: A Retrospective Study in Surgery and Neurosurgery

Authors: Mareen Brosterhaus, Antje Hammer, Steffen Kalina, Stefan Grau, Anjali A. Roeth, Hany Ashmawy, Thomas Gross, Marcel Binnebosel, Wolfram T. Knoefel, Tanja Manser

Abstract:

Background: The identification of critical incidents in hospitals is an essential component of improving patient safety. To date, various methods have been used to measure and characterize such critical incidents. These methods are often viewed by physicians and nurses as external quality assurance, and this creates obstacles to the reporting of events and the implementation of recommendations in practice. One way to overcome this problem is to use tools that directly involve staff in measuring indicators of quality and safety of care in the department. One such instrument is the global trigger tool (GTT), which helps physicians and nurses identify adverse events by systematically reviewing randomly selected patient records. Based on so-called ‘triggers’ (warning signals), indications of adverse events can be identified. While the tool is already used internationally, its implementation in German hospitals has been very limited. Objectives: This study aimed to assess the feasibility and potential of the global trigger tool for identifying adverse events in German hospitals. Methods: A total of 120 patient records were randomly selected from two surgical departments and one neurosurgical department of three university hospitals in Germany over a period of two months per department between January and July 2017. The records were reviewed using an adaptation of the German version of the Institute for Healthcare Improvement Global Trigger Tool to identify triggers and adverse event rates per 1000 patient-days and per 100 admissions. The severity of adverse events was classified using the National Coordinating Council for Medication Error Reporting and Prevention index. Results: A total of 53 adverse events were detected in the three departments. This corresponded to adverse event rates of 25.5-72.1 per 1000 patient-days and of 25.0-60.0 per 100 admissions across the three departments. 98.1% of identified adverse events were associated with non-permanent harm without (Category E, 71.7%) or with (Category F, 26.4%) the need for prolonged hospitalization. One adverse event (1.9%) was associated with potentially permanent harm to the patient. We also identified practical challenges in the implementation of the tool, such as the need to adapt the global trigger tool to the respective department. Conclusions: The global trigger tool is a feasible and effective instrument for quality measurement when adapted to departmental specifics. Based on our experience, we recommend continuous use of the tool, thereby directly involving clinicians in quality improvement.
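
As a back-of-the-envelope illustration of the rate metrics quoted above, the short Python sketch below computes adverse event rates per 1000 patient-days and per 100 admissions; the counts in the example call are hypothetical and are not the study data.

```python
# Illustrative adverse-event (AE) rate calculation mirroring the metrics in the
# abstract. The numbers passed in the example call are hypothetical.

def ae_rates(n_events, patient_days, admissions):
    """Return the AE rate per 1000 patient-days and per 100 admissions."""
    per_1000_patient_days = 1000.0 * n_events / patient_days
    per_100_admissions = 100.0 * n_events / admissions
    return per_1000_patient_days, per_100_admissions

rate_pd, rate_adm = ae_rates(n_events=20, patient_days=550, admissions=40)
print(f"{rate_pd:.1f} AEs per 1000 patient-days, {rate_adm:.1f} AEs per 100 admissions")
```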

Keywords: adverse events, global trigger tool, patient safety, record review

Procedia PDF Downloads 222
237 The Role of a Specialized Diet for Management of Fibromyalgia Symptoms: A Systematic Review

Authors: Siddhant Yadav, Rylea Ranum, Hannah Alberts, Abdul Kalaiger, Brent Bauer, Ryan Hurt, Ann Vincent, Loren Toussaint, Sanjeev Nanda

Abstract:

Background and significance: Fibromyalgia (FM) is a chronic pain disorder also characterized by chronic fatigue, morning stiffness, sleep and cognitive symptoms, and psychological disturbances (anxiety, depression), and is comorbid with multiple medical and psychiatric conditions. It has an incidence of 2-4% in the general population and is reported more commonly in women. Oxidative stress and inflammation are thought to contribute to pain in patients with FM, and the adoption of an antioxidant/anti-inflammatory diet has been suggested as a modality to alleviate symptoms. The aim of this systematic review was to evaluate the efficacy of specialized diets (ketogenic, gluten-free, Mediterranean, and low-carbohydrate) in improving FM symptoms. Methodology: A comprehensive search of the following databases from inception to July 15th, 2021, was conducted: Ovid MEDLINE and Epub ahead of print, in-process and other non-indexed citations and daily; Ovid Embase; Ovid EBM Reviews; the Cochrane Central Register of Controlled Trials; EBSCOhost CINAHL with full text; Elsevier Scopus; the Web of Science citation indexes (including the Emerging Sources Citation Index); and ClinicalTrials.gov. We included randomized controlled trials, non-randomized experimental studies, cross-sectional studies, cohort studies, case series, and case reports in adults with fibromyalgia. The risk of bias was assessed with the design-specific criteria recommended by the Agency for Healthcare Research and Quality (AHRQ). Results: Thirteen studies, with a total of 761 participants, were eligible for inclusion. Twelve of the 13 studies reported improvement in widespread body pain, joint stiffness, sleeping pattern, mood, and gastrointestinal symptoms, and one study reported no changes in symptomatology in patients with FM on specialized diets. None of the studies showed worsening of symptoms associated with a specific diet. Most of the patient population was female, with a mean age at fibromyalgia diagnosis of 48.12 years. Improvement in symptoms was reported by patients adhering to a gluten-free diet, a raw vegan diet, a tryptophan- and magnesium-enriched Mediterranean diet, an aspartame- and MSG-elimination diet, and specifically a Khorasan wheat diet. The risk of bias assessment found that six studies had a low risk of bias (five clinical trials and one case series), four studies had a moderate risk of bias, and three had a high risk of bias. In many of the studies, the allocation of treatment (diets) was not adequately concealed, and the researchers did not rule out any potential impact from a concurrent intervention or an unintended exposure that might have biased the results. On the other hand, there was a low risk of attrition bias in all the trials; all were conducted on an intention-to-treat basis, and the inclusion/exclusion criteria, exposures/interventions, and primary outcomes were valid, reliable, and implemented consistently across all study participants. Concluding statement: Patients with fibromyalgia who followed specialized diets experienced a variable degree of improvement in their widespread body pain. Improvement was also seen in stiffness, fatigue, mood, sleeping patterns, and gastrointestinal symptoms. Additionally, the majority of patients also reported improvement in overall quality of life.

Keywords: fibromyalgia, specialized diet, vegan, gluten free, Mediterranean, systematic review

Procedia PDF Downloads 53
236 Electromagnetic-Mechanical Stimulation on PC12 for Enhancement of Nerve Axonal Extension

Authors: E. Nakamachi, K. Matsumoto, K. Yamamoto, Y. Morita, H. Sakamoto

Abstract:

Recently, electromagnetic and mechanical stimulation have been recognized as effective extracellular environmental stimulation techniques to enhance regeneration of damaged peripheral nerve tissue. In this study, we developed a new hybrid bioreactor by adopting 50 Hz uniform alternating current (AC) magnetic stimulation and 4% strain mechanical stimulation. The guide tube for nerve regeneration is a mesh-structured tube made of a biodegradable polymer such as polylactic acid (PLA). However, when neural damage is large, there is a possibility that the peripheral nerve undergoes necrosis, so it is quite important to accelerate nerve tissue regeneration by enhancing the nerve axonal extension rate. Therefore, we designed and fabricated a system that can simultaneously apply uniform AC magnetic field stimulation and stretch stimulation to cells in order to enhance nerve axonal extension. Next, we evaluated the system's performance and the effectiveness of each stimulation for rat adrenal pheochromocytoma cells (PC12). First, we designed and fabricated the uniform AC magnetic field system and the stretch stimulation system. For the AC magnetic stimulation system, we focused on the use of a pole piece structure to allow in-situ microscopic observation. We designed an optimum pole piece structure using magnetic field finite element analyses and the response surface methodology. We fabricated the uniform AC magnetic field stimulation system as a bioreactor by adopting the analytically determined design specifications. We measured the magnetic flux density generated by the uniform AC magnetic field stimulation system and confirmed that the measured values show good agreement with the analytical results, with a uniform magnetic field observed. Second, we fabricated the cyclic stretch stimulation device under particular strain conditions, with the chamber made of polyoxymethylene (POM). We measured strains in the PC12 cell culture region to confirm the uniformity of the strain and found values slightly different from the target strain; we concluded that these differences were allowable in this mechanical stimulation system. We then evaluated the effectiveness of each stimulation in enhancing nerve axonal extension using PC12 cells. We confirmed that the average axonal extension length of PC12 under the uniform AC magnetic stimulation was increased by 16% at 96 h in our bioreactor. We could not confirm axonal extension enhancement under the stretch stimulation condition, where we found exfoliation of cells. Further, the hybrid stimulation enhanced axonal extension, because the magnetic stimulation inhibits the exfoliation of cells. We therefore concluded that the enhancement of PC12 axonal extension is due to the magnetic stimulation rather than the mechanical stimulation. Finally, we confirmed the effectiveness of the uniform AC magnetic field stimulation for nerve axonal extension using PC12 cells.

Keywords: nerve cell PC12, axonal extension, nerve regeneration, electromagnetic-mechanical stimulation, bioreactor

Procedia PDF Downloads 236
235 Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind System: Case Study

Authors: Fatemeh Deldarabdolmaleki, Mohamad Fakri Zaky Bin Ja'afar

Abstract:

Having a daylit space together with a view results in a pleasant and productive environment for office employees. A daylit space is a space which utilizes daylight as a basic source of illumination to fulfill users' visual demands and minimize electric energy consumption. Malaysian weather is hot and humid all year round because of the country's location in the equatorial belt. However, because most of the commercial buildings in Malaysia are air-conditioned, huge glass windows are normally installed in order to keep the physical and visual relation between inside and outside. As a result of the climatic situation and this trend, an ordinary office suffers from considerable heat gain, glare, and discomfort for occupants. Balancing occupant comfort and energy conservation in a tropical climate is a real challenge. This study concentrates on evaluating a venetian blind system using per-pixel analysis tools based on the cut-out metrics suggested in the literature. A workplace area in a private office room was selected as a case study. An eight-day measurement experiment was conducted to investigate the effect of different venetian blind angles in an office area under daylight conditions in Serdang, Malaysia. The goal of the study was to explore the daylight comfort of a commercially available venetian blind system, its daylight sufficiency and excess (8:00 AM to 5:00 PM), as well as glare. Recently developed software for analyzing High Dynamic Range Images (HDRI captured by a CCD camera), such as the Radiance-based Evalglare and hdrscope, helps to investigate luminance-based metrics. The main key factors are illuminance and luminance levels, mean and maximum luminance, daylight glare probability (DGP), and the luminance ratio of the selected mask regions. The findings show that in most cases, the morning session needs artificial lighting in order to achieve daylight comfort. However, in some conditions (e.g., 10° and 40° slat angles) in the second half of the day the workplane illuminance level exceeds the maximum of 2000 lx. Generally, a rising trend is observed in mean window luminance, and the most unpleasant cases occur after 2:00 PM. Considering the luminance criteria rating, the uncomfortable conditions occur in the afternoon session. Surprisingly, even in the no-blind condition, extreme window/task luminance ratios are not common. Regarding daylight glare probability, no DGP value higher than 0.35 was observed in this experiment.
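
The per-timestep checks described above can be summarised in a few lines; the Python sketch below assumes that workplane illuminance, DGP and mask-region luminances have already been extracted from the HDR captures (for instance with Evalglare), uses the thresholds quoted in the abstract (2000 lx, DGP 0.35), and all variable names and numbers are illustrative only.

```python
# Minimal sketch of the daylight comfort checks discussed above. Inputs are
# assumed to come from prior HDR image analysis; values below are placeholders.

def assess_timestep(workplane_lux, dgp, window_lum, task_lum):
    return {
        "excess_daylight": workplane_lux > 2000.0,   # over-lit workplane (lx)
        "glare_risk": dgp > 0.35,                    # daylight glare probability limit
        "window_task_ratio": window_lum / task_lum,  # luminance ratio of mask regions
    }

print(assess_timestep(workplane_lux=2350.0, dgp=0.22, window_lum=4800.0, task_lum=320.0))
```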

Keywords: daylighting, energy simulation, office environment, Venetian blind

Procedia PDF Downloads 232
234 The Challenge of Assessing Social AI Threats

Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi

Abstract:

The European Union (EU) Artificial Intelligence (AI) Act requires in Article 9 that risk management of AI systems include both technical and human oversight, while the NIST AI Risk Management Framework (Appendix C) and the ENISA AI Framework recommendations state that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats within social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, lack of explainability (e.g., the complexity of models can be challenging for stakeholders to grasp) leads to misunderstandings, biases, and erroneous decisions, which in turn impact the privacy, security, and accountability of the AI systems. Based on the four fundamental NIST criteria for explainability, explainability threats can be classified into four (4) sub-categories: a) Lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs. b) Lack of understandability: Explanations offered by systems should be comprehensible to individual users. c) Lack of accuracy: The provided explanation should accurately represent the system's process of generating outputs. d) Out of scope: The system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors. When present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of the AI systems. Socially related AI threats are recognized by various initiatives (e.g., the EU Ethics Guidelines for Trustworthy AI), standards (e.g., ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems) and EU legislation (e.g., the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, the Cyber Resilience Act). Measuring social threats, estimating the risks to AI systems associated with these threats, and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests, and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.

Keywords: social threats, artificial intelligence, mitigation, social experiment

Procedia PDF Downloads 33
233 Railway Ballast Volumes Automated Estimation Based on LiDAR Data

Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert

Abstract:

The ballast layer plays a key role in railroad maintenance and in the geometry of the track structure. Ballast also holds the track in place as trains roll over it. Track ballast is packed between the sleepers and on the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as a rapid degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to the high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated based on the excavation depth, the excavation width, the volume of the track skeleton (sleepers and rail) and the sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network by using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper presents an automated method for ballast volume estimation based on the processing of LiDAR data. The estimation of abnormal ballast volumes on the tracks is performed by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) was generated, and automatic extraction of the ballast profiles from these data is carried out. The ballast surplus is then estimated by comparing this empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects ballast surpluses whose volumes are close to the total quantities of spoil ballast excavated.
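
A minimal sketch of the surplus-estimation idea, assuming each LiDAR-derived cross-section has already been resampled onto a common lateral grid, is given below in Python; the profiles, grid step and section spacing are illustrative placeholders, not SNCF data.

```python
import numpy as np

# Compare measured ballast cross-sections against a theoretical profile and
# integrate the positive difference along the track. All inputs are synthetic.

def surplus_volume(measured_profiles, theoretical_profile, lateral_step, section_spacing):
    """measured_profiles: (n_sections, n_points) heights in metres; returns m^3."""
    excess = np.clip(measured_profiles - theoretical_profile, 0.0, None)  # keep surplus only
    area_per_section = excess.sum(axis=1) * lateral_step                  # m^2 per section
    return float(area_per_section.sum() * section_spacing)                # m^3 along track

x = np.linspace(-3.0, 3.0, 61)                     # lateral positions (m)
theory = np.maximum(0.5 - 0.2 * np.abs(x), 0.0)    # toy theoretical ballast profile
measured = np.tile(theory + 0.05, (100, 1))        # 100 sections, 5 cm of excess each
print(surplus_volume(measured, theory, lateral_step=0.1, section_spacing=1.0), "m^3")
```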

Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point

Procedia PDF Downloads 76
232 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil

Authors: Ana Julia C. Kfouri

Abstract:

A prerequisite of any building design is to provide security to the users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which arise between the person and the agent and must provide improved thermal comfort conditions and low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for projects, and they should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. Based on that, the study of thermal comfort in educational buildings is of particular relevance, since the thermal characteristics of these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. In this line of thought, to evaluate university classroom conditions, a detailed case study on the thermal comfort situation at the Federal University of Parana (UFPR) was carried out. The main goal of the study is to perform a thermal analysis in three classrooms at UFPR, in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied in order to evaluate the users' perception of the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, consisting of measurements of air temperature and air humidity, both inside and outside the building, as well as meteorological variables, such as wind speed and direction, solar radiation and rainfall, collected from a weather station. Then, a computer simulation was conducted using the EnergyPlus software to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results in order to draw conclusions about the local thermal conditions. The methodological approach included in the study allowed a distinct perspective on an educational building to better understand the classroom thermal performance, as well as the reasons for such behavior. Finally, the study induces a reflection about the importance of thermal comfort for educational buildings and proposes thermal alternatives for future projects, as well as a discussion about the significant impact of using computer simulation on engineering solutions, in order to improve the thermal performance of UFPR's buildings.
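
A minimal Python sketch of the simulation-versus-measurement comparison described above is shown below; it computes the mean bias error (MBE) and root-mean-square error (RMSE) between EnergyPlus outputs and on-site readings, using placeholder hourly values rather than the study data.

```python
import numpy as np

# Placeholder hourly series; in practice these would be the EnergyPlus outputs
# and the corresponding on-site measurements for one classroom.
measured_temp = np.array([22.1, 23.0, 24.2, 25.1, 24.8, 23.5])    # deg C, measured
simulated_temp = np.array([21.8, 23.4, 24.0, 25.6, 25.1, 23.2])   # deg C, simulated

mbe = float(np.mean(simulated_temp - measured_temp))
rmse = float(np.sqrt(np.mean((simulated_temp - measured_temp) ** 2)))
print(f"MBE = {mbe:+.2f} degC, RMSE = {rmse:.2f} degC")
```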

Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort

Procedia PDF Downloads 365
231 Nanocarriers Made of Amino Acid Based Biodegradable Polymers: Poly(Ester Amide) and Related Cationic and PEGylating Polymers

Authors: Sophio Kobauri, Temur Kantaria, Nina Kulikova, David Tugushi, Ramaz Katsarava

Abstract:

Polymeric nanoparticle-based drug delivery systems and therapeutics have great potential in the treatment of numerous diseases, owing to their flexible properties, which make it possible to tailor their structures, compositions, and properties. Important characteristics of the polymeric nanoparticles (PNPs) used as drug carriers are high particle stability, high carrier capacity, feasibility of encapsulation of both hydrophilic and hydrophobic drugs, and feasibility of variable routes of administration, including oral application and inhalation; NPs are especially effective for intracellular drug delivery since they penetrate into the cells' interior through endocytosis. A variety of PNP-based drug delivery systems, including charged and neutral, degradable and non-degradable polymers of both natural and synthetic origin, have been developed. Among this huge variety, the biodegradable PNPs, which can be cleared from the body after the fulfillment of their function, could be considered one of the most promising. For intracellular uptake it is highly desirable to have positively charged PNPs since they can penetrate deep into cell membranes. For long-lasting circulation of PNPs in the body it is important that they have so-called “stealth coatings” to protect them from attack by the immune system of the organism. One of the effective ways to render the PNPs “invisible” to the immune system is their PEGylation, which represents the coating of the surface of the PNPs with polyethylene glycol (PEG). The present work deals with constructing PNPs from amino acid based biodegradable polymers – a regular poly(ester amide) (PEA) composed of sebacic acid, leucine and 1,6-hexanediol (labeled as 8L6), a cationic PEA composed of sebacic acid, arginine and 1,6-hexanediol (labeled as 8R6), and a comb-like co-PEA composed of sebacic acid, malic acid, leucine and 1,6-hexanediol (labeled as PEG-PEA). The PNPs were fabricated using the polymer deposition/solvent displacement (nanoprecipitation) method. The regular PEA 8L6 forms stable negatively charged PNPs (zeta-potential within 2-12 mV) of the desired size (within 150-200 nm) in the presence of various surfactants (Tween 20, Tween 80, Brij 010, etc.). Blending the PEAs 8L6 and 8R6 gave 130-140 nm sized positively charged PNPs with zeta-potential within +20 ÷ +28 mV, depending on the 8L6/8R6 ratio. The PEGylating PEA PEG-PEA was synthesized by interaction of the epoxy-co-PEA [8L6]0,5-[tES-L6]0,5 with mPEG-amine-2000. Stable and positively charged PNPs were fabricated using pure PEG-PEA as a surfactant. A firm anchoring of the PEG-PEA to the 8L6/8R6 based PNPs (owing to the high affinity of the backbones of all three PEAs) provided good stabilization of the NPs. An in vitro biocompatibility study of the new PNPs with four different stable cell lines, A549 (human), U-937 (human), RAW264.7 (murine), and Hepa 1-6 (murine), showed that they are biocompatible. Considering the high stability and cell compatibility of the elaborated PNPs, one can conclude that they are promising for subsequent therapeutic applications. This work was supported by the joint grant from the Science and Technology Center in Ukraine and the Shota Rustaveli National Science Foundation of Georgia #6298 “New biodegradable cationic polymers composed of arginine and spermine-versatile biomaterials for various biomedical applications”.

Keywords: biodegradable poly(ester amide)s, cationic poly(ester amide), pegylating poly(ester amide), nanoparticles

Procedia PDF Downloads 96
230 Cross-Comparison between Land Surface Temperature from Polar and Geostationary Satellite over Heterogenous Landscape: A Case Study in Hong Kong

Authors: Ibrahim A. Adeniran, Rui F. Zhu, Man S. Wong

Abstract:

Owing to the insufficient spatial representativeness and continuity of in situ temperature measurements from weather stations (WS), the use of temperature measurements from WS for large-range diurnal analysis in heterogeneous landscapes has been limited. This has made the accurate estimation of land surface temperature (LST) from remotely sensed data more crucial. Moreover, the study of the dynamic interaction between the atmosphere and the physical surface of the Earth could be enhanced at both annual and diurnal scales by using optimal LST data derived from satellite sensors. The tradeoff between the spatial and temporal resolution of LSTs from satellite thermal infrared sensors (TIRS) has, however, been a major challenge, especially when high spatiotemporal LST data are required. It is well known from the existing literature that polar satellites have the advantage of high spatial resolution, while geostationary satellites have high temporal resolution. Hence, this study is aimed at designing a framework for the cross-comparison of LST data from polar and geostationary satellites in a heterogeneous landscape. This could help to understand the relationship between the LST estimates from the two satellites and, consequently, their integration in diurnal LST analysis. Landsat-8 satellite data will be used as the representative of the polar satellites due to the availability of its long-term series, while the Himawari-8 satellite will be used as the data source for the geostationary satellites because of its improved TIRS. As the study area, the Hong Kong Special Administrative Region (HK SAR) will be selected, due to the heterogeneity of the landscape of the region. LST data will be retrieved from both satellites using the split-window algorithm (SWA), and the resulting data will be validated by comparing the satellite-derived LST data with temperature data from automatic WS in HK SAR. The LST data from both satellites will then be separated based on the land use classification in HK SAR using the Global Land Cover by National Mapping Organizations version 3 (GLCNMO 2013) data. The relationship between LST data from Landsat-8 and Himawari-8 will then be investigated based on land-use class and over different seasons of the year in order to account for seasonal variation in their relationship. The resulting relationship will be spatially and statistically analyzed and graphically visualized for detailed interpretation. Findings from this study will reveal the relationship between the two satellite data sets based on the land use classification within the study area and the seasons of the year. While the information provided by this study will help in the optimal combination of LST data from polar (Landsat-8) and geostationary (Himawari-8) satellites, it will also serve as a roadmap for annual and diurnal urban heat island (UHI) analysis in Hong Kong SAR.
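
For readers unfamiliar with the retrieval step, the Python sketch below shows one widely used generic split-window formulation relating two brightness temperatures, surface emissivity and column water vapour to LST; the coefficients are placeholders, since operational values are sensor- and atmosphere-specific and are not taken from this study.

```python
# Generic split-window sketch: LST from two thermal-band brightness temperatures.
# Coefficients c0..c6 are illustrative placeholders, not calibrated values.

def split_window_lst(t_i, t_j, emis_mean, emis_diff, water_vapour, c):
    """t_i, t_j: brightness temperatures (K); emis_*: band-mean / band-difference emissivity."""
    dt = t_i - t_j
    return (t_i + c[1] * dt + c[2] * dt ** 2 + c[0]
            + (c[3] + c[4] * water_vapour) * (1.0 - emis_mean)
            + (c[5] + c[6] * water_vapour) * emis_diff)

coeffs = [-0.27, 1.38, 0.18, 54.3, -2.24, -129.2, 16.4]   # placeholder coefficients
print(split_window_lst(t_i=295.4, t_j=294.1, emis_mean=0.975,
                       emis_diff=0.005, water_vapour=2.5, c=coeffs), "K")
```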

Keywords: automatic weather station, Himawari-8, Landsat-8, land surface temperature, land use classification, split window algorithm, urban heat island

Procedia PDF Downloads 47
229 The Procedural Sedation Checklist Manifesto, Emergency Department, Jersey General Hospital

Authors: Jerome Dalphinis, Vishal Patel

Abstract:

The Bailiwick of Jersey is an island British Crown Dependency situated off the coast of France. Jersey General Hospital's emergency department sees approximately 40,000 patients a year. It is outside the NHS, with secondary care being free at the point of care. Sedation is a continuum which extends from a normal conscious level to being fully unresponsive. Procedural sedation produces a minimally depressed level of consciousness in which the patient retains the ability to maintain an airway and responds appropriately to physical stimulation. Its goals are to improve patient comfort and tolerance of the procedure and to alleviate associated anxiety. Indications can be stratified by acuity: emergency (e.g., cardioversion for a life-threatening dysrhythmia) and urgent (e.g., joint reduction). In the emergency department, this is most often achieved using a combination of opioids and benzodiazepines. Some departments also use ketamine to produce dissociative sedation, a cataleptic state of profound analgesia and amnesia. The response to pharmacological agents is highly individual, and the drugs used occasionally have unpredictable pharmacokinetics and pharmacodynamics, which can result in progression between levels of sedation irrespective of the intention. Therefore, practitioners must be able to ‘rescue’ patients from deeper sedation. These practitioners need to be senior clinicians with advanced airway skills (AAS) training. If incorrectly undertaken, procedural sedation can lead to adverse effects such as dangerous hypoxia and unintended loss of consciousness; studies by the National Confidential Enquiry into Patient Outcome and Death (NCEPOD) have reported avoidable deaths. The Royal College of Emergency Medicine, UK (RCEM) released updated ‘Safe Sedation of Adults in the Emergency Department’ guidance in 2017, detailing a series of standards for staff competencies, and for the environment and equipment required for each target sedation depth. The emergency department in Jersey undertook an audit in 2018 to assess its current practice. It showed gaps in clinical competency and the need for uniform care and improved documentation. This spurred the development of a checklist incorporating the above RCEM standards, including contraindications for procedural sedation and a difficult airway assessment. This was approved following discussion with the relevant heads of departments and the patient safety directorates. Following this, a second audit was carried out in 2019 with 17 completed checklists (11 relocations of joints, 6 cardioversions). Data were obtained from the controlled resuscitation drugs book containing the documented use of ketamine, alfentanil, and fentanyl. TrakCare, which is the patient electronic record system, was then referenced to obtain further information. The results showed dramatic improvement compared to 2018, and they have been subdivided into six categories: pre-procedure assessment recording of significant medical history and ASA grade (2-fold increase), informed consent (100% documentation), pre-oxygenation (88%), staff (90% were AAS practitioners) and monitoring (92% use of non-invasive blood pressure, pulse oximetry, capnography, and cardiac rhythm monitoring) during the procedure, and discharge instructions including the documented return of normal vitals and consciousness (82%).
This procedural sedation checklist is a safe intervention that identifies pertinent information about the patient and provides a standardised checklist for the delivery of gold standard of care.

Keywords: advanced airway skills, checklist, procedural sedation, resuscitation

Procedia PDF Downloads 91
228 Diurnal Cycle of Rainfall and Convective Properties over West and Central Africa

Authors: Balogun R. Ayodeji, Adefisan E. Adesanya, Adeyewa Z. Debo, E. C. Okogbue

Abstract:

The need to investigate diurnal weather cycles in West Africa stems from the fact that complex interactions often result from diurnal weather patterns. This study investigates diurnal cycles of wind, rainfall and convective properties using six (6) hour interval data from ERA-Interim and the Tropical Rainfall Measuring Mission (TRMM). The seven distinct zones used in this work, classified as rainforest (west-coast, dry, Nigeria-Cameroon), savannah (Nigeria, and Central Africa and South Sudan (CASS)), Sudano-Sahel, and Sahel, were clearly indicated by the rainfall pattern in each zone. Results showed that the land-ocean warming contrast was strongly sensitive to the seasonal cycle; it was very weak during March-May (MAM) but clearly expressed during June-September (JJAS). Dipoles of wind convergence/divergence and wet/dry precipitation between the CASS and Nigeria savannah zones were identified in the morning and evening hours of MAM, whereas a distinct night and day anomaly in the same location of CASS was found to be consistent during the JJAS season. The diurnal variation of convective properties showed that stratiform precipitation, owing to the extremely low occurrence of flashcount, was more dominant during morning hours for both MAM and JJAS than in other periods of the day. On the other hand, the diurnal variation of system sizes showed that small system sizes were most dominant during the daytime periods for both MAM and JJAS, whereas larger system sizes were frequent during the evening, night, and morning hours. The locations of flashcount and system sizes agreed with earlier results that morning and daytime hours are dominated by stratiform precipitation and small system sizes, respectively. Most results clearly showed that the eastern locations of the Sudano-Sahel and Sahel were consistently dry because rainfall and precipitation features were predominantly few. System sizes greater than or equal to 800 km² were found in the western axis of the Sudano-Sahel and Sahel zones, whereas the eastern axis, particularly in the Sahel zone, had minimal occurrences of small/large system sizes. From the results on the locations of extreme systems, a flashcount greater than 275 in one single system was never observed during the morning (6Z) period, whereas the evening (18Z) period had the most frequent cases (at least 8) of flashcount exceeding 275 in one single system. The results presented show the importance of diurnal variation in understanding precipitation, flashcount, and system size patterns at diurnal scales, and in understanding land-ocean contrast, precipitation, and wind field anomalies at diurnal scales.

Keywords: convective properties, diurnal cycle, flashcount, system sizes

Procedia PDF Downloads 103
227 The Impact of Information and Communications Technology (ICT)-Enabled Service Adaptation on Quality of Life: Insights from Taiwan

Authors: Chiahsu Yang, Peiling Wu, Ted Ho

Abstract:

From emphasizing economic development to stressing public happiness, the international community mainly hopes to understand whether the quality of life of the public is improving. The Better Life Index (BLI) constructed by the OECD uses living conditions and quality of life as starting points to cover 11 areas of life and to convey the state of the general public's well-being. In light of the BLI framework, the Directorate General of Budget, Accounting and Statistics (DGBAS) of the Executive Yuan instituted the Gross National Happiness Index to understand the needs of the general public and to measure the progress of the aforementioned conditions in residents across the island. Living conditions consist of income and wealth, jobs and earnings, and housing conditions, while quality of life covers health status, work and life balance, education and skills, social connections, civic engagement and governance, environmental quality, and personal security. The ICT area consists of health care, living environment, ICT-enabled communication, transportation, government, education, pleasure, purchasing, and job and employment. In the wake of further science and technology development, the rapid formation of information societies, and closer integration between lifestyles and information societies, the public's well-being within information societies has indeed become a noteworthy topic. The Board of Science and Technology of the Executive Yuan used the OECD's BLI as a reference in the establishment of the Taiwan-specific ICT-Enabled Better Life Index. Using this index, the government plans to examine whether the public's quality of life is improving, as well as measure the public's satisfaction with current digital quality of life. This understanding will enable the government to gauge the degree of influence and impact that each dimension of digital services has on digital life happiness, while also serving as an important reference for promoting digital service development. Information and communications technology (ICT) has been affecting people's living styles and, further, impacting people's quality of life (QoL). Even though studies have shown that ICT access and usage have both positive and negative impacts on life satisfaction and well-being, many governments continue to invest in e-government programs to initiate their path towards the information society. This research is one of the few attempts to link e-government benchmarks to subjective well-being perception, to further address the gap between users' perception and existing hard-data assessment, and then to propose a model to trace measurement results back to the original public policy in order for policy makers to justify their future proposals.

Keywords: information and communications technology, quality of life, satisfaction, well-being

Procedia PDF Downloads 327
226 Marketization of Higher Education in the UK and Its Impacts on Teaching Practitioners

Authors: Hossein Rezaie

Abstract:

Academic institutions, especially universities, have been known as cradles of learning, teaching great thinkers and creating the type of knowledge that is supposed to be bereft of utilitarian motives. Nonetheless, it seems that such intellectual centers have entered into a competition with each other for attracting the attention of potential clients. The traditional values of (higher) education, such as nurturing criticality and fostering intellectuality in students, have been replaced with strategic planning, quality assurance, performance assessment, and academic audits. Not being immune from the whims and wishes of marketization, the system of higher education in the UK has been recalibrated by policy makers to address the demand and supply of student education, academic research and other university activities on the basis of monetary factors. As an immediate example in this vein, the Russell Group in the UK, which is comprised of 24 leading UK research universities, has explicitly expressed its policy on its official website as follows: ‘Russell Group universities are global businesses competing for staff, students and funding with the best in the world’. Furthermore, certain attempts have been made to corporatize the system of HE, which have been manifested in the remodeling of university governing bodies on corporate lines and the development of measurement scales for indicating the performance of teaching practitioners. Nevertheless, it seems that such structural changes in policies toward the system of HE have a bearing on the practices of practitioners and educators, as well as on the identity of students, who are the customers of educational services. The effects of marketization have been examined mainly in terms of students' perceptions and motivation, institutional policies, and university management. However, the teaching practitioner side seems to be an under-studied area with regard to any changes in its expectations, satisfaction and perception of professional identity in the aftermath of introducing market-driven values into the HE of the UK. As a result, this research aims to investigate the possible outcomes of market-driven values on the practitioner side of HE in the UK and seeks to address the following research questions: 1- How is the change in the mission of HE in the UK reflected in institutional documents? 1-A- How is the change of mission represented in job adverts? 1-B- How is the change of mission represented in university prospectuses? 2- How are teaching practitioners represented regarding their roles and obligations in the prospectuses and job ads published by UK HE institutions? In order to address these questions, the researcher will analyze 30 prospectuses and job ads published by Russell Group universities, taking Critical Discourse Analysis as his point of departure and using the analytical methods of genre analysis and Systemic Functional Linguistics to probe into the generic features and the representation of participants, in this case teaching practitioners, in the selected corpus.

Keywords: higher education, job advertisements, marketization of higher education, prospectuses

Procedia PDF Downloads 222
225 Improving Data Completeness and Timely Reporting: A Joint Collaborative Effort between Partners in Health and Ministry of Health in Remote Areas, Neno District, Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Moses Banda Aron, Julia Higgins, Manuel Mulwafu, Kondwani Mpinga, Mwayi Chunga, Grace Momba, Enock Ndarama, Dickson Sumphi, Atupere Phiri, Fabien Munyaneza

Abstract:

Background: Data is key to supporting health service delivery, as stakeholders, including NGOs, rely on it for effective service delivery, decision-making, and system strengthening. Several studies have generated debate on data quality from national health management information systems (HMIS) in sub-Saharan Africa. This limits the utilization of data in resource-limited settings, which already struggle to meet standards set by the World Health Organization (WHO). We aimed to evaluate data quality improvement of the Neno district HMIS over a 4-year period (2018-2021) following quarterly data reviews introduced in January 2020 by the district health management team and Partners In Health. Methods: An exploratory mixed-methods design was used to examine reporting rates, followed by in-depth interviews using key informant interviews (KIIs) and focus group discussions (FGDs). We used the WHO desk review module to assess the quality of HMIS data in the Neno district captured from 2018 to 2021. The metrics assessed included the completeness and timeliness of 34 reports. Completeness was measured as the percentage of non-missing reports. Timeliness was measured as the percentage of reports submitted within the expected reporting deadline. We computed t-tests and recorded p-values, summaries, and percentage changes using R and Excel 2016. We analyzed demographics for the key informant interviews in Power BI. We developed themes from 7 FGDs and 11 KIIs using Dedoose software, from which we extracted the perceptions of healthcare workers, the interventions implemented, and suggestions for improvement. The study was reviewed and approved by the Malawi National Health Science Research Committee (IRB: 22/02/2866). Results: Overall, the average reporting completeness rate was 83.4% (before) and 98.1% (after), while timeliness was 68.1% and 76.4%, respectively. Completeness of reports increased over time: 2018, 78.8%; 2019, 88%; 2020, 96.3%; and 2021, 99.9% (p < 0.004). The trend for timeliness had been declining until 2021, when it improved: 2018, 68.4%; 2019, 68.3%; 2020, 67.1%; and 2021, 81% (p < 0.279). Comparing 2021 reporting rates to the mean of the three preceding years, completeness increased from 88% to 99% (in 2021), while timeliness increased from 68% to 81%. Sixty-five percent of reports maintained the national standard of 90%+ in completeness, but only 24% did so in timeliness. Thirty-two percent of reports met the national standard. Only 9% improved in both completeness and timeliness; these were the cervical cancer, nutrition care support and treatment, and youth-friendly health services reports. Fifty percent of reports did not improve to the standard in timeliness, and only one did not in completeness. On the other hand, factors associated with improvement included improved communications and reminders using internal communication, data quality assessments, checks, and reviews. Decentralizing data entry to the facility level was suggested to improve timeliness. Conclusion: The findings suggest that data quality in the district HMIS has improved following these collaborative efforts. We recommend maintaining such initiatives to identify remaining quality gaps, and that results be shared publicly to support increased use of data. These results can inform the Ministry of Health and its partners about these interventions and advise initiatives for improving data quality.
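
The two indicators are straightforward to compute once the monthly report log is available; the Python sketch below illustrates the definitions used above (completeness as the percentage of non-missing reports, timeliness as the percentage submitted by the deadline) on a small hypothetical log.

```python
# Hypothetical monthly report log; field names and records are illustrative.
reports = [
    {"received": True,  "days_late": 0},
    {"received": True,  "days_late": 3},
    {"received": False, "days_late": None},   # missing report
    {"received": True,  "days_late": 0},
]

expected = len(reports)
received = sum(r["received"] for r in reports)
on_time = sum(bool(r["received"] and r["days_late"] == 0) for r in reports)

completeness = 100.0 * received / expected   # % of non-missing reports
timeliness = 100.0 * on_time / expected      # % of reports submitted on time
print(f"completeness = {completeness:.1f}%, timeliness = {timeliness:.1f}%")
```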

Keywords: data quality, data utilization, HMIS, collaboration, completeness, timeliness, decision-making

Procedia PDF Downloads 56
224 Exposure to Radon on Air in Tourist Caves in Bulgaria

Authors: Bistra Kunovska, Kremena Ivanova, Jana Djounova, Desislava Djunakova, Zdenka Stojanovska

Abstract:

The carcinogenic effects of radon as a radioactive noble gas have been studied and show a strong correlation between radon exposure and lung cancer occurrence, even in the case of low radon levels. The major part of the natural radiation dose in humans is received by inhaling radon and its progeny, which originate from the decay chain of U-238. Indoor radon poses a substantial threat to human health when build-up occurs in confined spaces such as homes, mines and caves, and the risk increases with the duration of radon exposure and is proportional to both the radon concentration and the time of exposure. Tourist caves are a case of special environmental conditions that may be affected by high radon concentrations. Tourist caves are a recognized danger in terms of radon exposure to cave workers (guides, employees working in shops built above the cave entrances, etc.), but due to the sensitive nature of the cave environment, high concentrations cannot be easily removed. Forced ventilation of the air in the caves is considered unthinkable due to the possible harmful effects on the microclimate, flora and fauna. The risks to human health posed by exposure to elevated radon levels in caves are not well documented. Various studies around the world often report very high concentrations of radon in caves and exposure of employees, but without a follow-up assessment of the overall impact on human health. This study was developed in the implementation of a national project to assess the potential health effects caused by exposure to elevated levels of radon in buildings with public access under the National Science Fund of Bulgaria, in the framework of grant No КП-06-Н23/1/07.12.2018. The purpose of the work is to assess the radon levels in Bulgarian caves and the exposure of visitors and workers. The number of caves to survey (sample size) was calculated for simple random selection from the 65 available caves (sampling population); the result was 13 caves, with a 95 % confidence level and a margin of error (confidence interval) of approximately 25 %. The radon concentration in air at specific locations in the caves was measured using CR-39 type nuclear track-etch detectors placed by members of the research team. Despite the fact that all of the caves were formed in karst rocks, the radon levels differed considerably from each other (97-7575 Bq/m3). An assessment of the influence of the orientation of the caves relative to the earth's surface (horizontal, inclined, vertical) on the radon concentration was performed. An evaluation of the health hazards and the radon exposure risk caused by inhaling radon and its daughter products in each surveyed cave was carried out. Reducing the time spent in the cave has been recommended in order to decrease the exposure of workers.
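
The quoted sample size can be reproduced with the standard finite-population formula for simple random sampling; the Python sketch below assumes maximum variability (p = 0.5) and a 95 % confidence level, as is conventional when no prior estimate is available.

```python
import math

# Finite-population sample size for simple random sampling:
# n0 = z^2 * p * (1 - p) / e^2, corrected as n = n0 / (1 + (n0 - 1) / N).

def finite_population_sample_size(N, margin, z=1.96, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / N))

print(finite_population_sample_size(N=65, margin=0.25))  # -> 13 caves
```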

Keywords: tourist caves, radon concentration, exposure, Bulgaria

Procedia PDF Downloads 165
223 The Validation and Reliability of the Arabic Effort-Reward Imbalance Model Questionnaire: A Cross-Sectional Study among University Students in Jordan

Authors: Mahmoud M. AbuAlSamen, Tamam El-Elimat

Abstract:

Amid the economic crisis in Jordan, the Jordanian government has opted for a knowledge economy where education is promoted as a means for economic development. University education usually comes at the expense of study-related stress that may adversely impact the health of students. Since stress is a latent variable that is difficult to measure, a valid tool should be used in doing so. The effort-reward imbalance (ERI) model is used as a measurement tool for occupational stress. The model was built on the notion of reciprocity, which relates ‘effort’ to ‘reward’ through the mediating ‘over-commitment’. Reciprocity assumes equilibrium between effort and reward, where ‘high’ effort is adequately compensated with ‘high’ reward. When this equilibrium is violated (i.e., high effort with low reward), this may elicit negative emotions and stress, which have been correlated with adverse health conditions. The theory of ERI has been established in many different parts of the world, and associations with chronic diseases and the health of workers have been explored at length. While much of the effort-reward imbalance research has been conducted in work settings, there has been a growing interest in understanding the validity of the ERI model when applied to other social settings such as schools and universities. The ERI questionnaire was recently developed in Arabic to measure ERI among high school teachers. However, little information is available on the validity of the ERI questionnaire in university students. A cross-sectional study was conducted on 833 students in Jordan to measure the validity and reliability of the Arabic ERI questionnaire among university students. Reliability, as measured by Cronbach's alpha of the effort, reward, and overcommitment scales, was 0.73, 0.76, and 0.69, respectively, suggesting satisfactory reliability. The factorial structure was explored using principal axis factoring. The results fitted a five-factor solution in which both effort and overcommitment were uni-dimensional, while the reward scale was three-dimensional with its factors being ‘support’, ‘esteem’, and ‘security’. The solution explained 56% of the variance in the data. The established ERI theory was replicated with excellent validity in this study. The effort-reward ratio in university students was 1.19, which suggests a slight degree of failed reciprocity. The study also investigated the association of effort, reward, overcommitment, and ERI with participants' demographic factors and self-reported health. ERI was found to be significantly associated with absenteeism (p < 0.0001), a past history of failed courses (p = 0.03), and poor academic performance (p < 0.001). Moreover, ERI was found to be associated with poor self-reported health among university students (p = 0.01). In conclusion, the Arabic ERI questionnaire is reliable and valid for use in measuring effort-reward imbalance in university students in Jordan. The results of this research are important in informing higher education policy in Jordan.
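
For reference, the scale reliabilities reported above are Cronbach's alpha values; the Python sketch below shows the standard computation on a simulated item-response matrix (the data are synthetic placeholders, not the study responses).

```python
import numpy as np

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                            # simulated 'true' effort score
effort_items = latent + rng.normal(scale=0.8, size=(200, 6))  # six noisy effort items
print(f"alpha = {cronbach_alpha(effort_items):.2f}")
```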

Keywords: effort-reward imbalance, factor analysis, validity, self-reported health

Procedia PDF Downloads 94
222 Validation of an Acuity Measurement Tool for Maternity Services

Authors: Cherrie Lowe

Abstract:

The TrendCare Patient Dependency System is currently utilized by a large number of maternity services across Australia, New Zealand and Singapore. In 2012, 2013, and 2014, validation studies were initiated in all three countries to validate the acuity tools used for women in labour and for postnatal mothers and babies. This paper will present the findings of the validation study. Aim: The aims of this study were to identify whether the care hours provided by the TrendCare acuity system were an accurate reflection of the care required by women and babies, and to obtain evidence of changes required to acuity indicators and/or category timings to ensure the TrendCare acuity system remains reliable and valid across a range of maternity care models in three countries. Method: A non-experimental action research methodology was used across four District Health Boards in New Zealand, two large public Australian maternity services and a large tertiary maternity service in Singapore. Standardized data collection forms and timing devices were used to collect midwife contact times with the women and babies included in the study. Rejection processes excluded samples where care was not completed or was rationed. The variances between actual timed midwife/mother/baby contact and the TrendCare acuity times were identified and investigated. Results: 87.5% (18) of TrendCare acuity category timings matched the actual timings recorded for midwifery care. 12.5% (3) of TrendCare night duty categories provided fewer minutes of care than the actual timings. 100% of labour ward TrendCare categories matched the actual timings for midwifery care. The actual times recorded for assistance to New Zealand independent midwives in the labour ward showed a significant deviation from previous studies, demonstrating the need for additional time allocations in TrendCare. Conclusion: The results demonstrated the importance of regularly validating the TrendCare category timings against the care hours required, as variations in models of care and length of stay in maternity units have increased midwifery workloads on the night shift. The level of assistance provided by the core labour ward staff to the independent midwife has increased substantially. Outcomes: As a consequence of this study, changes were made to the night duty TrendCare maternity categories, additional acuity indicators were developed, and the times for assisting independent midwives were increased. The updated TrendCare version was delivered to maternity services in 2014.

Keywords: maternity, acuity, research, nursing workloads

Procedia PDF Downloads 348
221 Intelligent Indoor Localization Using WLAN Fingerprinting

Authors: Gideon C. Joseph

Abstract:

The ability to localize mobile devices is quite important, as some applications may require the location information of these devices to operate or to deliver better services to the users. Although there are several ways of acquiring location data for mobile devices, the WLAN fingerprinting approach has been considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative measure of the radio frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is a major concern, for example, in indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile's RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe the location of the device, such as the longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is to be modelled, so that subsequent locations of mobile devices can be predicted using the developed model. Describing mathematical relationships between the RSSI measurements and the localization parameters is one option for modelling the problem, but the complexity of such an approach is a serious drawback. In contrast, we propose an intelligent system that can learn the mapping from such RSSI measurements to the localization parameters to be predicted. The system is capable of improving its performance as more experiential knowledge is acquired. The most appealing consideration in using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted correspond to two different tasks: the longitude and latitude of mobile devices are real values (a regression problem), while the floor and building of the mobile devices are integer-valued or categorical (a classification problem). This research work presents artificial neural network based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to obtain the performance of the trained systems in terms of the achieved Mean Absolute Error (MAE) and error rates for the regression and classification tasks involved.
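
As a concrete illustration of the two learning tasks described above, the Python sketch below trains one network to regress coordinates and another to classify the floor from RSSI vectors, using scikit-learn multilayer perceptrons on synthetic fingerprints; the data, network sizes and access-point count are placeholders and do not reproduce the study's systems.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(1)
rssi = rng.uniform(-100, -30, size=(500, 20))   # 500 fingerprints x 20 access points (dBm)
lon_lat = rng.uniform(0, 50, size=(500, 2))     # synthetic local coordinates (m)
floor = rng.integers(0, 4, size=500)            # synthetic floor labels

# Regression task: longitude/latitude. Classification task: floor.
reg = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500).fit(rssi, lon_lat)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(rssi, floor)

mae = np.mean(np.abs(reg.predict(rssi[:10]) - lon_lat[:10]))   # Mean Absolute Error
print(f"sample MAE = {mae:.2f} m, predicted floors = {clf.predict(rssi[:5])}")
```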

Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression

Procedia PDF Downloads 315
220 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of the extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) trained to refine the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with accuracy and an F1 score greater than 96% with the proposed method.
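
The coarse/fine idea can be sketched in a few lines. The example below is not the authors' OCCNN2: it fits a classic one-class model to synthetic fundamental-frequency vectors for the coarse boundary, then trains a small feedforward network on points labelled by that coarse model, including synthetic samples drawn over the feature-space box, which is an assumed construction used here only to turn the one-class problem into a binary one. All frequencies and settings are illustrative.

```python
# Hedged sketch (not the authors' OCCNN2): a simplified two-step anomaly detector on
# synthetic fundamental-frequency vectors. Step 1: classic OCC for a coarse boundary.
# Step 2: feedforward network fitted to coarse labels for a finer decision surface.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
normal = rng.normal([3.9, 5.0, 9.8, 10.3], 0.05, size=(500, 4))    # healthy-state frequencies (Hz, assumed)
damaged = rng.normal([3.7, 4.8, 9.5, 10.0], 0.05, size=(100, 4))   # shifted frequencies after damage (assumed)

scaler = StandardScaler().fit(normal)
normal_s = scaler.transform(normal)

# Step 1: coarse boundary from a classic one-class classifier trained on normal data only.
coarse = OneClassSVM(nu=0.05, gamma="scale").fit(normal_s)

# Step 2 (assumed construction): label synthetic points spread over the feature-space box
# with the coarse model, then fit a feedforward network to this binary problem.
synth = rng.uniform(normal_s.min(0) - 6, normal_s.max(0) + 6, size=(4000, 4))
X_fit = np.vstack([normal_s, synth])
y_fit = np.r_[np.ones(len(normal_s)), (coarse.predict(synth) == 1).astype(float)]
fine = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=1).fit(X_fit, y_fit)

X_test = scaler.transform(np.vstack([normal[:100], damaged]))
y_true = np.r_[np.ones(100), np.zeros(100)]                        # 1 = normal, 0 = anomaly
print("F1 score (normal class):", f1_score(y_true, fine.predict(X_test)))
```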

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 97
219 Specific Earthquake Ground Motion Levels That Would Affect Medium-To-High Rise Buildings

Authors: Rhommel Grutas, Ishmael Narag, Harley Lacbawan

Abstract:

Construction of high-rise buildings is a means to address the increasing population in Metro Manila, Philippines. The existence of the Valley Fault System within the metropolis and other nearby active faults poses threats to a densely populated city. Distant, shallow and large-magnitude earthquakes have the potential to generate slow and long-period vibrations that would affect medium-to-high rise buildings. Heavy damage and building collapse are consequences of prolonged shaking of the structure. If the ground and the building have almost the same period, a resonance effect occurs, which causes prolonged shaking of the building. Microzoning the long-period ground response would aid in the seismic design of medium- to high-rise structures. The shear-wave velocity structure of the subsurface is an important parameter for evaluating ground response. Borehole drilling is one of the conventional methods of determining the shear-wave velocity structure; however, it is an expensive approach. As an alternative geophysical exploration method, microtremor array measurements can be used to infer the structure of the subsurface. A microtremor array measurement system was used to survey fifty sites around Metro Manila, including some municipalities of Rizal and Cavite. Measurements were carried out during the day under good weather conditions. The team was composed of six persons for the deployment and simultaneous recording of the microtremor array sensors. The instruments were laid on the ground away from sewage systems and levelled using the adjustment legs and bubble level. A total of four sensors were deployed for each site, three at the vertices of an equilateral triangle and one sensor at the centre. The circular arrays were set up with a maximum side length of approximately four kilometers; the shortest side length for the smallest array was approximately 700 meters. Each recording lasted twenty to sixty minutes. From the recorded data, f-k analysis was applied to obtain phase velocity curves. An inversion technique was then applied to construct the shear-wave velocity structure. This project provided a microzonation map of the metropolis and a profile showing the long-period response of the deep sedimentary basin underlying Metro Manila, which would be suitable for local administrators in their land-use planning and the earthquake-resistant design of medium- to high-rise buildings.
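
To make the resonance point above concrete, the short sketch below uses the common quarter-wavelength approximation T_site ≈ 4·Σ(hᵢ/Vsᵢ) for the predominant period of a layered sediment column, and the rule of thumb of roughly 0.1 s per storey for a building's fundamental period. The layer profile and the threshold are illustrative assumptions, not values from the study.

```python
# Hedged illustration (not study data): comparing an approximate site period from a
# layered shear-wave velocity profile with rule-of-thumb building periods.
layers = [(50.0, 200.0), (150.0, 400.0), (300.0, 700.0)]  # (thickness m, Vs m/s), assumed profile

t_site = 4 * sum(h / vs for h, vs in layers)               # quarter-wavelength approximation
print(f"Approximate predominant site period: {t_site:.2f} s")

for storeys in (10, 20, 30, 40):
    t_building = 0.1 * storeys                              # common rule of thumb
    status = "near resonance" if abs(t_building - t_site) < 0.5 else "away from site period"
    print(f"{storeys}-storey building: T ≈ {t_building:.1f} s, {status}")
```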

Keywords: earthquake, ground motion, microtremor, seismic microzonation

Procedia PDF Downloads 448
218 First Systematic Review on Aerosol Bound Water: Exploring the Existing Knowledge Domain Using the CiteSpace Software

Authors: Kamila Widziewicz-Rzonca

Abstract:

The presence of PM-bound water as an integral chemical component of suspended aerosol particles (PM) has become one of the most actively discussed issues in recent years. The UN climate summits on climate change (COP24) indicate that PM of anthropogenic origin (released mostly from coal combustion) is directly responsible for climate change. Chemical changes at the particle–liquid (water) interface determine many phenomena occurring in the atmosphere, such as visibility, cloud formation or precipitation intensity. Since water-soluble particles such as nitrates, sulfates, or sea salt easily become cloud condensation nuclei, they affect the climate, for example by increasing cloud droplet concentration. Aerosol water is a master component of atmospheric aerosols and a medium that enables all aqueous-phase reactions occurring in the atmosphere. Thanks to a thorough bibliometric analysis conducted using the CiteSpace software, it was possible to identify past trends and possible future directions in measuring aerosol-bound water. This work does not aim to review the existing literature on the topic; rather, it is an in-depth bibliometric analysis exploring existing gaps and new frontiers in the topic of PM-bound water. To assess the major scientific areas related to PM-bound water and clearly define which among them are the most active topics, we checked Web of Science databases from 1996 to 2018. We answer the question of which authors, countries, institutions and aerosol journals have most strongly influenced PM-bound water research. The results obtained indicate that the paper with the greatest citation burst was Tang I. N. and Munkelwitz H. R., 'Water activities, densities, and refractive indices of aqueous sulfates and sodium nitrate droplets of atmospheric importance', 1994. The largest number of articles in this specific field was published in Atmospheric Chemistry and Physics. The absolute leader in the number of publications among all research institutions is the National Aeronautics and Space Administration (NASA). Meteorology and atmospheric sciences is the category with the most studies in this field. A very small number of studies on PM-bound water conduct a quantitative measurement of its presence in ambient particles or its origin. Most articles instead point to PM-bound water as an artifact in organic carbon and ion measurements, without any chemical analysis of its content. This scientometric study presents the current and most recent literature regarding particulate-bound water.

Keywords: systematic review, aerosol-bound water, PM-bound water, CiteSpace, knowledge domain

Procedia PDF Downloads 105
217 Principal Well-Being in Hong Kong: A Quantitative Investigation

Authors: Junjun Chen, Yingxiu Li

Abstract:

The occupational well-being of school principals plays a vital role in the pursuit of individual and school wellness and success. However, principals’ well-being worldwide is under increasing threat because of the challenging and complex nature of their work and growing demands for school standardisation and accountability. Pressure is particularly acute in the post-pandemic future as principals attempt to deal with the impact of the pandemic on top of more regular demands. This is particularly true in Hong Kong, as school principals are increasingly wedged between unparalleled political, social, and academic responsibilities. Recognizing the semantic breadth of well-being, scholars have not settled on a single, mutually agreeable definition but agree that the concept of well-being has multiple dimensions across various disciplines. The multidimensional approach promises more precise assessments of the relationships between well-being and other concepts than the ‘affect-only’ approach or other single domains for capturing the essence of principal well-being. This multidimensional well-being concept is adopted to understand principal well-being in this study. This study aimed to understand the situation of principal well-being and its influential drivers with a sample of 670 principals from Hong Kong and Mainland China. An online survey was sent by the researchers to the participants after the outbreak of COVID-19. All participants were well informed about the purposes and procedure of the project and the confidentiality of the data prior to filling in the questionnaire. Confirmatory factor analysis and structural equation modelling performed with Mplus were employed to analyse the dataset. The data analysis procedure involved the following three steps. First, the descriptive statistics (e.g., mean and standard deviation) were calculated. Second, confirmatory factor analysis (CFA) was used to trim the principal well-being measurement, performed with maximum likelihood estimation. Third, structural equation modelling (SEM) was employed to test the influential factors of principal well-being. The results of this study indicated that overall principal well-being was above the average mean score. The highest rating in this study given by the principals was to their psychological and social well-being (M = 5.21). This was followed by spiritual (M = 5.14; SD = .77), cognitive (M = 5.14; SD = .77), emotional (M = 4.96; SD = .79), and physical well-being (M = 3.15; SD = .73). Participants ranked their physical well-being the lowest. Moreover, professional autonomy, supervisor and collegial support, school physical conditions, professional networking, and social media showed a significant impact on principal well-being. The findings of this study will potentially enhance not only principal well-being but also the functioning of individual principals and schools, without sacrificing principal well-being for quality education in the process. This will eventually move one step forward to a new future: a wellness society, as advocated by the OECD. Importantly, well-being is an inside job that begins with choosing wellness, whilst support for becoming a wellness principal is also imperative.
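
The three-step analysis pipeline described above can be mimicked, in a much-simplified form, with open-source tools. The sketch below is not the authors' Mplus CFA/SEM: it computes descriptive statistics for synthetic factor scores and then fits an ordinary regression of a composite well-being score on the reported influential factors, simply to show the shape of the analysis; all data, coefficients and variable names are assumptions.

```python
# Hedged, much-simplified stand-in for the analysis pipeline (not the authors' Mplus model):
# step 1 = descriptive statistics; the CFA/SEM steps are replaced here by a plain regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 670                                               # sample size, from the abstract
factors = pd.DataFrame({                              # assumed driver variables on a 1-6 style scale
    "autonomy": rng.normal(4.5, 1.0, n),
    "supervisor_support": rng.normal(4.8, 1.0, n),
    "physical_conditions": rng.normal(4.2, 1.0, n),
    "networking": rng.normal(4.6, 1.0, n),
})
wellbeing = (0.30 * factors["autonomy"] + 0.25 * factors["supervisor_support"]
             + 0.20 * factors["physical_conditions"] + 0.15 * factors["networking"]
             + rng.normal(0, 0.8, n))                 # synthetic composite well-being score

print(factors.describe().loc[["mean", "std"]].round(2))          # step 1: descriptives
fit = sm.OLS(wellbeing, sm.add_constant(factors)).fit()          # simplified stand-in for SEM
print(fit.params.round(3))
```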

Keywords: well-being, school principals, quantitative, influential factors

Procedia PDF Downloads 60
216 Measurement of Magnetic Properties of Grain-Oriented Electrical Steels at Low and High Fields Using a Novel Single Sheet Tester

Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa

Abstract:

Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at high flux densities suitable for its typical applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and low-frequency magnetic shielding in magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, B-H loop, AC relative permeability and specific power loss of conventional grain-oriented (CGO) and high-permeability grain-oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. 40 strips, comprising 20 CGO and 20 HGO, each 305 mm x 30 mm x 0.27 mm, from one supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer in which LabVIEW version 8.5 from National Instruments (NI) was installed, an NI 4461 data acquisition (DAQ) card, an impedance matching transformer to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card. The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for calculation of the magnetic field strength and flux density, respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, so as to have repeatable and comparable measurements. The low-noise NI 4461 card, with 24-bit resolution, a sampling rate of 204.8 kHz and a 92 kHz bandwidth, was chosen to take the measurements to minimize the influence of thermal noise. In order to reduce environmental noise, the yokes, sample and search coil carrier were placed in a noise-shielding chamber. HGO was found to have better magnetic properties in both the high and low magnetisation regimes. This is attributed to the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO performs better than CGO in both low and high magnetic field applications.
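
The two acquired voltages translate into field quantities in a straightforward way: the magnetising current (shunt voltage divided by the shunt resistance) gives H(t) through the primary turns and the magnetic path length, and the time integral of the secondary voltage divided by the secondary turns and cross-sectional area gives B(t). The sketch below illustrates this calculation in Python rather than LabVIEW; the winding turns, shunt value and strip cross-section follow the abstract, while the effective magnetic path length and the synthetic waveforms are assumptions.

```python
# Hedged sketch (not the authors' LabVIEW code): recovering H(t) and B(t) from the two
# acquired voltages of a single sheet tester. Path length and waveforms are assumed.
import numpy as np

N_PRIMARY, N_SECONDARY = 100, 500        # winding turns, from the abstract
R_SHUNT = 4.7                            # ohm, from the abstract
AREA = 30e-3 * 0.27e-3                   # strip cross-section in m^2 (30 mm x 0.27 mm)
PATH_LENGTH = 0.29                       # m, assumed effective magnetic path length
FS = 204_800                             # samples/s, the card's maximum rate

t = np.arange(0, 0.1, 1 / FS)            # 0.1 s of synthetic 50 Hz signals
v_shunt = 0.5 * np.sin(2 * np.pi * 50 * t)        # assumed shunt voltage waveform
v_secondary = 2.0 * np.cos(2 * np.pi * 50 * t)    # assumed induced secondary voltage

h_field = N_PRIMARY * (v_shunt / R_SHUNT) / PATH_LENGTH       # H(t) in A/m
b_field = np.cumsum(v_secondary) / FS / (N_SECONDARY * AREA)  # B(t) in T (numerical integral)
b_field -= b_field.mean()                 # remove integration offset

print(f"Peak H: {h_field.max():.1f} A/m, peak B: {b_field.max():.3f} T")
```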

Keywords: flux density, electrical steel, LabVIEW, magnetization

Procedia PDF Downloads 272
215 Dual-Layer Microporous Layer of Gas Diffusion Layer for Proton Exchange Membrane Fuel Cells under Various RH Conditions

Authors: Grigoria Athanasaki, Veerarajan Vimala, A. M. Kannan, Louis Cindrella

Abstract:

Energy usage has increased throughout the years, leading to severe environmental impacts. Since the majority of energy is currently produced from fossil fuels, there is a global need for clean energy solutions. Proton Exchange Membrane Fuel Cells (PEMFCs) offer a very promising solution for transportation applications because of their solid configuration and low-temperature operation, which allows them to start quickly. One of the main components of PEMFCs is the Gas Diffusion Layer (GDL), which manages water and gas transport and has a direct influence on fuel cell performance. In this work, a novel dual-layer GDL with gradient porosity was prepared, using polyethylene glycol (PEG) as a pore former, to improve the gas diffusion and water management in the system. The microporous layer (MPL) of the fabricated GDL consists of the carbon powder PUREBLACK, sodium dodecyl sulfate as a surfactant, and 34% wt. PTFE; the gradient porosity was created by applying one layer using 30% wt. PEG on the carbon substrate, followed by a second layer without using any pore former. The total carbon loading of the microporous layer is ~ 3 mg.cm-2. For the assembly of the catalyst layer, Nafion membrane (Ion Power, Nafion Membrane NR211) and Pt/C electrocatalyst (46.1% wt.) were used. The catalyst ink was deposited on the membrane via a microspraying technique. The Pt loading is ~ 0.4 mg.cm-2, and the active area is 5 cm2. The sample was characterized ex-situ via wetting angle measurement, Scanning Electron Microscopy (SEM), and Pore Size Distribution (PSD) to evaluate its characteristics. Furthermore, for the performance evaluation, in-situ characterization via fuel cell testing using H2/O2 and H2/air as reactants, under 50, 60, 80, and 100% relative humidity (RH), was carried out. The results were compared to a single-layer GDL, fabricated with the same carbon powder and loading as the dual-layer GDL, and a commercially available GDL with MPL (AvCarb2120). The findings reveal highly hydrophobic properties of the microporous layer for both PUREBLACK-based samples, while the commercial GDL demonstrates hydrophilic behavior. The dual-layer GDL shows high and stable fuel cell performance under all the RH conditions, whereas the single-layer GDL manifests a drop in performance at high RH in both oxygen and air, caused by catalyst flooding. The commercial GDL shows very low and unstable performance, possibly because of its hydrophilic character and thinner microporous layer. In conclusion, the dual-layer GDL with PEG appears to have improved gas diffusion and water management in the fuel cell system. Because its porosity increases from the catalyst layer to the carbon substrate, it allows easier access of the reactant gases from the flow channels to the catalyst layer and more efficient water removal from the catalyst layer, leading to higher performance and stability.

Keywords: gas diffusion layer, microporous layer, proton exchange membrane fuel cells, relative humidity

Procedia PDF Downloads 105
214 Facies Sedimentology and Astronomic Calibration of the Reineche Member (Lutetian)

Authors: Jihede Haj Messaoud, Hamdi Omar, Hela Fakhfakh Ben Jemia, Chokri Yaich

Abstract:

The Upper Lutetian alternating marl–limestone succession of the Reineche Member was deposited over a warm, shallow carbonate platform that permitted Nummulites proliferation. High-resolution studies of the 30-meter-thick Nummulites-bearing Reineche Member, cropping out in Central Tunisia (Jebel Siouf), have been undertaken on its pronounced cyclical sedimentary sequences, in order to investigate the periodicity of the cycles and the related orbital-scale oceanic and climatic changes. The palaeoenvironmental and palaeoclimatic data are preserved in several proxies obtainable through high-resolution sampling and laboratory measurement and analysis, such as magnetic susceptibility (MS) and carbonate content, in conjunction with wireline logging tools. Time-series analysis of the proxies permits the cyclicity orders present in the studied interval to be established and linked to orbital cycles. MS records provide high-resolution proxies for relative sea-level change in Late Lutetian strata. The spectral analysis of MS fluctuations confirmed the orbital forcing by the presence of the complete suite of orbital frequencies: the precession of 23 ka, the obliquity of 41 ka, and notably the two modes of eccentricity of 100 and 405 ka. Based on the two periodic sedimentary cycles detected by wavelet analysis of the proxy fluctuations, which coincide with the long-term 405 ka eccentricity cycle, the Reineche Member spanned 0.8 Myr. Wireline logging tools such as gamma ray and sonic were used as proxies to decipher cyclicity and trends in sedimentation and contributed to identifying and correlating units. They were used to constrain the highest-frequency cyclicity, which is modulated by a longer-wavelength cyclicity apparently controlled by clay content. The marl–limestone couplets, interpreted as the result of variations in carbonate productivity, have been suggested to represent the sedimentary response to the orbital forcing. The calculation of cycle durations through the Reineche Member is used as a geochronometer and permits the astronomical calibration of the geologic time scale. Furthermore, MS coupled with carbonate contents and fossil occurrences provides strong evidence for combined detrital input and marine surface carbonate productivity cycles. These two synchronous processes were driven by the precession index and ‘fingerprinted’ in the basic marl–limestone couplets, modulated by orbital eccentricity.
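
The spectral step can be illustrated with a toy calculation. The sketch below builds a synthetic MS depth series containing the 405 ka, 100 ka and 23 ka components under an assumed constant sedimentation rate (30 m deposited over roughly 0.8 Myr) and recovers their periods from a simple periodogram; it is an illustration of the approach only, not the authors' data or their wavelet analysis.

```python
# Hedged illustration (synthetic data, not the study's): recovering orbital periods from a
# magnetic-susceptibility depth series via a simple periodogram. Rates and amplitudes assumed.
import numpy as np

SED_RATE = 3.75e-5           # m/yr, assumed (30 m over ~0.8 Myr)
dz = 0.02                    # sampling interval in metres
depth = np.arange(0, 30, dz)
age = depth / SED_RATE       # years, under the constant-rate assumption

ms = (1.0 * np.sin(2 * np.pi * age / 405e3)      # long eccentricity
      + 0.5 * np.sin(2 * np.pi * age / 100e3)    # short eccentricity
      + 0.3 * np.sin(2 * np.pi * age / 23e3)     # precession
      + 0.2 * np.random.default_rng(0).normal(size=age.size))

spec = np.abs(np.fft.rfft(ms - ms.mean())) ** 2
freq = np.fft.rfftfreq(age.size, d=dz / SED_RATE)          # cycles per year
top = freq[np.argsort(spec)[-3:]]                          # three strongest spectral peaks
print("Dominant periods (ka):", np.sort(1 / top / 1e3)[::-1].round(1))
```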

Keywords: magnetic susceptibility, cyclostratigraphy, orbital forcing, spectral analysis, Lutetian

Procedia PDF Downloads 275
213 Building an Opinion Dynamics Model from Experimental Data

Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinion while interacting. Furthermore, it is not clear whether different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after the interaction. However, these experiments force people to express their opinion as a number instead of using natural language (which is then, eventually, encoded as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together, without checking whether different topics may show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, we repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, in which a person starting at, for example, +8 will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, is different from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded on experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
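
To make the measurement scheme and the reported effect sizes concrete, the minimal sketch below simulates agents whose continuous opinion (sign × certainty, bounded at ±10) is updated with a small pull toward the sign of the opinion they are exposed to, plus a larger random fluctuation. It is a toy update rule consistent with the observations described above, not the authors' fitted model, and all coefficients are assumptions.

```python
# Hedged sketch (not the authors' fitted model): continuous opinion = sign(agree/disagree)
# * certainty in [-10, 10]; influence nudges toward the exposed sign; noise > influence.
import numpy as np

rng = np.random.default_rng(42)
N, STEPS = 200, 50
INFLUENCE, NOISE = 0.3, 1.0                 # assumed effect sizes (noise larger than influence)

opinions = rng.uniform(-10, 10, N)          # initial continuous opinions

for _ in range(STEPS):
    partners = rng.permutation(N)                      # random interaction partner each round
    exposed_sign = np.sign(opinions[partners])         # only "agree"/"disagree" is shown
    opinions += INFLUENCE * exposed_sign + NOISE * rng.normal(size=N)
    opinions = np.clip(opinions, -10, 10)              # certainty is bounded at 10

print(f"Share agreeing after {STEPS} interactions: {np.mean(opinions > 0):.2f}")
print(f"Mean |opinion| (certainty proxy): {np.abs(opinions).mean():.2f}")
```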

Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule

Procedia PDF Downloads 86
212 Video Analytics on Pedagogy Using Big Data

Authors: Jamuna Loganath

Abstract:

Education is the key to the development of any individual’s personality. Today’s students will be tomorrow’s citizens of the global society. The education of the student is the edifice on which his/her future will be built. Schools should therefore provide an all-round development of students so as to foster a healthy society. The behavior and attitude of students in school play an essential role in the success of the education process. Frequent reports of misbehaviors such as clowning, harassing classmates, and verbal insults are becoming common in schools today. If this issue is left unattended, it may foster a negative attitude and increase delinquent behavior. So, the need of the hour is to find a solution to this problem. To solve this issue, it is important to monitor students’ behavior in school, give the necessary feedback, and mentor them to develop a positive attitude and help them become successful adults. Nevertheless, measuring students’ behavior and attitude is extremely challenging. No present technology has proven effective in this measurement process, because the actions, reactions, interactions, and responses of the students are rarely captured in the data, owing to their complexity. The purpose of this proposal is to recommend an effective supervising system, after carrying out a feasibility study, by measuring the behavior of students. This can be achieved by equipping schools with CCTV cameras. These CCTV cameras, installed in various schools of the world, capture the facial expressions and interactions of the students inside and outside their classroom. The real-time raw videos captured from the CCTV can be uploaded to the cloud over a network. The video feeds are distributed across nodes in the same rack, or on different racks in the same cluster, in Hadoop HDFS. The video feeds are converted into small frames and analyzed using various pattern recognition algorithms and the MapReduce paradigm. Then, the video frames are compared with the benchmark database (good behavior). When misbehavior is detected, an alert message can be sent to the counseling department, which helps them in mentoring the students. This will help in improving the effectiveness of the education process. As video feeds come from multiple geographical areas (schools from different parts of the world), BIG DATA helps in real-time analysis, as the data can be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions. It also allows the analysis of data that cannot be handled by traditional software applications such as RDBMSs and OODBMSs, and it has proven successful in handling human reactions with ease. Therefore, BIG DATA could certainly play a vital role in handling this issue. Thus, the effectiveness of the education process can be enhanced with the help of video analytics using the latest BIG DATA technology.
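
The MapReduce step described above can be sketched conceptually as a map phase that emits a flag for every frame matched as misbehavior against the benchmark database, and a reduce phase that aggregates flags per student and triggers an alert past a threshold. The sketch below uses plain Python functions to show the shape of the computation; it is not a production Hadoop job, and the record format and threshold are assumptions.

```python
# Hedged conceptual sketch (not a Hadoop job): map emits (key, 1) per flagged frame,
# reduce sums per (school, student) key. Records and threshold are illustrative.
from collections import defaultdict

frame_records = [                          # (school, student_id, flagged_as_misbehavior), assumed format
    ("school_A", "s01", True), ("school_A", "s01", True),
    ("school_A", "s02", False), ("school_B", "s07", True),
]

def map_phase(records):
    """Emit a ((school, student), 1) pair for every frame flagged as misbehavior."""
    for school, student, flagged in records:
        if flagged:
            yield (school, student), 1

def reduce_phase(pairs):
    """Sum the flags per (school, student) key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return counts

ALERT_THRESHOLD = 2                        # assumed number of flagged frames that triggers an alert
for (school, student), n in reduce_phase(map_phase(frame_records)).items():
    if n >= ALERT_THRESHOLD:
        print(f"Alert counseling department: {student} at {school} flagged in {n} frames")
```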

Keywords: big data, cloud, CCTV, education process

Procedia PDF Downloads 220
211 Muscle and Cerebral Regional Oxygenation in Preterm Infants with Shock Using Near-Infrared Spectroscopy

Authors: Virany Diana, Martono Tri Utomo, Risa Etika

Abstract:

Background: Shock is a severe condition and a major cause of morbidity and mortality in the Neonatal Intensive Care Unit. Preterm infants are very susceptible to shock caused by many complications, such as asphyxia, patent ductus arteriosus, intraventricular haemorrhage, necrotizing enterocolitis, persistent pulmonary hypertension of the newborn, and septicaemia. Limited hemodynamic monitoring for early detection of shock causes delayed intervention and compromises outcomes. Clinical parameters such as capillary refill time (CRT), heart rate, cold extremities, and urine production are still used in neonatal shock detection. Blood pressure is most frequently used to evaluate a preterm infant's circulation, but hypotension indicates uncompensated shock. Near-infrared spectroscopy (NIRS) is known as a noninvasive tool for monitoring and detecting states of inadequate tissue perfusion. Muscle oxygen saturation reflects decreased cardiac output earlier than systemic parameters of tissue oxygenation, while cerebral regional oxygen saturation is still stabilized by autoregulation. However, to our best knowledge, no study to date has analyzed the decrease of muscle regional oxygen saturation (mRSO₂) and the ratio of muscle to cerebral regional oxygen saturation (mRSO₂/cRSO₂) by NIRS in preterm infants with shock. Purpose: The purpose of this study is to analyze the decrease of mRSO₂ and the ratio of muscle to cerebral regional oxygen saturation (mRSO₂/cRSO₂) by NIRS in preterm infants with shock. Patients and Methods: This cross-sectional study was conducted on preterm infants of 28-34 weeks gestational age, admitted to the NICU of Dr. Soetomo Hospital from November to January 2022. Patients were classified into two groups: shock and non-shock. The diagnosis of shock was based on clinical criteria (tachycardia, prolonged CRT, cold extremities, decreased urine production, and mean arterial pressure (MAP) less than the gestational age in weeks). Measurement of mRSO₂ and cRSO₂ by NIRS was performed by the doctor in charge when the patient arrived at the NICU. Results: We enrolled 40 preterm infants. The initial conventional hemodynamic parameters used as the basis for diagnosing shock showed significant differences in all variables. Preterm infants with shock had a higher mean HR (186.45 ± 1.5), lower MAP (29.8 ± 2.1), and lower SBP (45.1 ± 4.28) than non-shock infants, and most had a prolonged CRT. Patient outcomes were not significantly different between shock and non-shock patients. The mean mRSO₂ in the shock and non-shock groups was 33.65 ± 11.32 vs. 69.15 ± 3.96 (p = 0.001), and the mean ratio mRSO₂/cRSO₂ was 0.45 ± 0.12 vs. 0.84 ± 0.43 (p = 0.001); both were significantly different. The mean cRSO₂ in the shock and non-shock groups was 71.60 ± 4.90 vs. 81.85 ± 7.85 (p = 0.082), not significantly different. Conclusion: The decrease of mRSO₂ and the ratio mRSO₂/cRSO₂ can differentiate between shock and non-shock in preterm infants while cRSO₂ is still normal.
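
As an illustration of the group comparison behind these figures, the short sketch below generates synthetic per-infant mRSO₂ and cRSO₂ values roughly matching the reported group means, computes the mRSO₂/cRSO₂ ratio, and compares the two groups with an independent-samples t-test. The group sizes, spreads, and the use of Welch's test are assumptions; the numbers are not the study data.

```python
# Hedged sketch (synthetic data, not the study dataset): per-infant mRSO2/cRSO2 ratio and
# a shock vs. non-shock comparison, mirroring the kind of test reported in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_per_group = 20                           # assumed split of the 40 enrolled infants

shock = {"mRSO2": rng.normal(33.7, 11.3, n_per_group), "cRSO2": rng.normal(71.6, 4.9, n_per_group)}
non_shock = {"mRSO2": rng.normal(69.2, 4.0, n_per_group), "cRSO2": rng.normal(81.9, 7.9, n_per_group)}

ratio_shock = shock["mRSO2"] / shock["cRSO2"]
ratio_non_shock = non_shock["mRSO2"] / non_shock["cRSO2"]

t_stat, p_value = stats.ttest_ind(ratio_shock, ratio_non_shock, equal_var=False)
print(f"Mean ratio (shock): {ratio_shock.mean():.2f}, (non-shock): {ratio_non_shock.mean():.2f}")
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```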

Keywords: preterm infant, regional muscle oxygen saturation, regional cerebral oxygen saturation, NIRS, shock

Procedia PDF Downloads 62
210 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data that exhibit inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered to be a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are robust to the distributional assumptions and to various data anomalies, as compared to the widely used least square estimates. Relevant hypothesis tests are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.
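
The inefficiency of least squares under skewed, fat-tailed errors can be seen in a few lines of simulation. The sketch below is not the paper's modified maximum likelihood estimator for the skew t; it substitutes a generic robust Huber M-estimator (statsmodels RLM) and a log-normal error model simply to show that the least-squares slope has a visibly larger sampling spread than a robust alternative when errors depart from normality. All settings are assumptions.

```python
# Hedged illustration (not the paper's MML estimator): Monte Carlo comparison of the
# sampling spread of the OLS slope vs. a robust Huber M-estimate under skewed,
# heavy-tailed regression errors (log-normal used as a rough stand-in for a skew t).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, beta1, n_reps = 100, 2.0, 500
ols_slopes, robust_slopes = [], []

for _ in range(n_reps):
    x = rng.uniform(0, 10, n)
    errors = rng.lognormal(0.0, 1.0, n) - np.exp(0.5)    # skewed, heavy right tail, mean ~0
    y = 1.0 + beta1 * x + errors
    X = sm.add_constant(x)
    ols_slopes.append(sm.OLS(y, X).fit().params[1])
    robust_slopes.append(sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit().params[1])

print(f"OLS slope:    mean {np.mean(ols_slopes):.3f}, std {np.std(ols_slopes):.3f}")
print(f"Robust slope: mean {np.mean(robust_slopes):.3f}, std {np.std(robust_slopes):.3f}")
```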

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 383
209 Value Adding of Waste Biomass of Capsicum and Chilli Crops for Medical and Health Supplement Industries

Authors: Mursleen Yasin, Sunil Panchal, Michelle Mak, Zhonghua Chen

Abstract:

“The use of agricultural and horticultural waste to obtain beneficial products, thus reducing its environmental impact and helping the general population.” Every year, 20 billion dollars of food is wasted in the world. All the energy, resources, nutrients and metabolites are lost to landfill as well. On-farm production losses are a major issue in agriculture. Almost 25% of vegetables never leave the farm because they are not considered perfect for supermarkets and are treated as waste material, along with the rest of the plant parts. For capsicums, this waste is 56% of the total crop. The Capsicum genus is rich in a group of compounds called capsaicinoids, which are the source of the spiciness of these fruits. Capsaicin and dihydrocapsaicin are the major members, comprising almost 90% of this group. The major production and accumulation site is the non-edible part of the fruit, i.e., the placenta. Other parts of the plant, like the stem, leaves, pericarp and seeds, also contain these pungent compounds. Capsaicinoids have analgesic, antioxidant, anti-inflammatory, antibacterial, anti-virulence, anti-carcinogenic, chemopreventive, chemotherapeutic, and antidiabetic properties, among others. They are also effective in treating problems related to the gastrointestinal tract and in lowering cholesterol and triglycerides in obesity. The aim of the study is to develop a standardised technique for capsaicinoid extraction and to identify the nutrient treatment that gives better fruit and capsaicinoid yield. For this research, 3 capsicum and 2 chilli varieties were grown in a high-tech glasshouse facility in Sydney, Australia. Plants were treated with three levels of nutrient treatment, i.e., EC 1.8, EC 2.8 and EC 3.8, in order to check the effect on fruit yield and capsaicinoid concentration. A solvent extraction procedure with 75% ethanol is used to extract these secondary metabolites. Physiological, post-harvest and waste biomass measurements and metabolomic analysis are also performed. The results showed that EC 2.8 gave the better fruit yield for capsicums, and those fruits had the higher capsaicinoid concentration. For chillies, higher EC levels had better results than the lower treatment. UHPLC analysis was done to quantify the compounds, and a decrease in capsaicin concentration was observed with crop maturation. The outcome of this project is a sustainable technique for the extraction of capsaicinoids which can easily be adopted by farmers. In this way, farmers can add value to waste by extracting and selling capsaicinoids to the nutraceutical and pharmaceutical industries and also earn secondary income from the 56% waste of the capsicum crop.

Keywords: capsaicinoids, plant waste, capsicum, solvent extraction, waste biomass

Procedia PDF Downloads 49