Search results for: partial least square
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2831


41 Comparing Community Health Agents, Physicians and Nurses in Brazil's Family Health Strategy

Authors: Rahbel Rahman, Rogério Meireles Pinto, Margareth Santos Zanchetta

Abstract:

Background: Existing shortcomings of current health-service delivery include poor teamwork, competencies that do not address consumer needs, and episodic rather than continuous care. Brazil's Sistema Único de Saúde (Unified Health System, UHS) is acknowledged worldwide as a model for delivering community-based care through interdisciplinary teams of the Estratégia Saúde da Família (Family Health Strategy, FHS), composed of Community Health Agents (in Portuguese, Agentes Comunitários de Saúde, ACS), nurses, and physicians. FHS teams are mandated to collectively offer clinical care, disease prevention services, vector control, health surveillance, and social services. Our study compares medical providers (nurses and physicians) and community-based providers (ACS) on their perceptions of work environment, professional skills, cognitive capacities, and job context. Global health administrators and policy makers can leverage similarities and differences across care providers to develop interprofessional training for community-based primary care. Methods: Cross-sectional data were collected from 168 ACS, 62 nurses, and 32 physicians in Brazil. We compared providers' demographic characteristics (age, race, and gender) and job-context variables (caseload, work experience, work proximity to community, length of commute, and familiarity with the community). Providers' perceptions of their work environment (work conditions and work resources), professional skills (consumer input, interdisciplinary collaboration, efficacy of FHS teams, work methods, and decision-making autonomy), and cognitive capacities (knowledge and skills, skill variety, confidence, and perseverance) were also compared. Descriptive and bivariate analyses, such as Pearson chi-square and Analysis of Variance (ANOVA) F-tests, were performed to draw comparisons across providers. Results: The majority of participants were ACS (64%), followed by nurses (24%) and physicians (12%). 
The majority of nurses and ACS identified as mixed race (ACS, n=85; nurses, n=27); most physicians identified as male (n=16; 52%) and white (n=18; 58%). Physicians were less likely to incorporate consumer input and demonstrated greater decision-making autonomy than nurses and ACS. ACS reported the highest levels of knowledge and skills but the least confidence compared to nurses and physicians. ACS, nurses, and physicians agreed that FHS teams improved the quality of health in their catchment areas, though nurses tended to disagree that interdisciplinary collaboration facilitated their work. Conclusion: To our knowledge, no study has compared key demographic and cognitive variables across ACS, nurses, and physicians in the context of their work environment and professional training. We suggest that global health systems can leverage the diverse perspectives of providers to implement a community-based primary care model grounded in interprofessional training. Our study underscores the need for in-service training to instill reflective skills in providers, to improve the communication skills of medical providers, and to improve the curative skills of ACS. Greater autonomy needs to be extended to community-based providers to offer care integral to addressing consumer and community needs.
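The bivariate comparisons mentioned above (Pearson chi-square for categorical variables, one-way ANOVA F-tests for continuous ones) can be sketched from their textbook definitions. The counts and scores below are entirely hypothetical, not the study's data:

```python
import numpy as np

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

def anova_f(*groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    grand = np.concatenate([np.asarray(g, dtype=float) for g in groups]).mean()
    ssb = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ssw = sum(((np.asarray(g, dtype=float) - np.mean(g)) ** 2).sum() for g in groups)
    df_b = len(groups) - 1
    df_w = sum(len(g) for g in groups) - len(groups)
    return float((ssb / df_b) / (ssw / df_w))

# Hypothetical gender counts by provider type (ACS, nurses, physicians)
print(chi_square([[30, 138], [5, 57], [16, 16]]))
# Hypothetical Likert scores for decision-making autonomy in the three groups
print(anova_f([3, 3, 4, 2], [4, 3, 4, 3], [5, 4, 5, 5]))
```

The resulting statistics would then be compared against the chi-square and F reference distributions to obtain p-values.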

Keywords: global health systems, interdisciplinary health teams, community health agents, community-based care

Procedia PDF Downloads 229
40 Regional Hydrological Extremes Frequency Analysis Based on Statistical and Hydrological Models

Authors: Hadush Kidane Meresa

Abstract:

Hydrological extremes frequency analysis is the foundation for hydraulic engineering design, flood protection, drought management, and water resources planning aimed at utilizing the available water resources to meet the objectives of different organizations and sectors in a country. The spatial variation of the statistical characteristics of extreme flood and drought events is key to regional flood and drought analysis and mitigation management. For hydro-climatic regions where data records are short, scarce, of poor quality, or otherwise insufficient, regionalization methods are applied to transfer at-site information to a region. This study aims at regional high- and low-flow frequency analysis for Polish river basins. The frequent occurrence of hydrological extremes in the region and rapid water resources development have caused serious concern over the flood and drought magnitudes and frequencies of rivers in Poland. The magnitudes and frequencies of high and low flows in the basin are needed for flood and drought planning, management, and protection, at present and in the future. Hydrologically homogeneous high- and low-flow regions were formed by cluster analysis of site characteristics, using hierarchical and C-means clustering and PCA. Statistical tests for regional homogeneity, namely the discordancy and heterogeneity measures, were applied. In compliance with the test results, the river basins of the region were divided into ten homogeneous regions. In this study, frequency analysis of high and low flows is conducted using annual maximum (AM) series for high flows and 7-day minimum series for low flows, with six statistical distributions. 
The L-moment and LL-moment methods showed a homogeneous region over the entire province, with the Generalized Logistic (GLOG), Generalized Extreme Value (GEV), Pearson type III (P-III), Generalized Pareto (GPAR), Weibull (WEI), and Power (PR) distributions as the regional drought and flood frequency distributions. The 95th-percentile and flow duration curves of 1, 7, 10, and 30 days were plotted for 10 stations. The cluster analysis, however, yielded two regions in the west and east of the province, for which the L-moment and LL-moment methods demonstrated homogeneity and identified the GLOG and Pearson type III (P-III) distributions as the regional frequency distributions, respectively. To examine the spatial variation and regional frequency distribution of flood and drought characteristics, the 10 best catchments from the whole region were selected, and besides the main variable (high and low streamflow), variables related to physiographic and drainage characteristics were used to identify and delineate homogeneous pools and to derive the best regression models for ungauged sites. These are mean annual rainfall, seasonal flow, average slope, NDVI, aspect, flow length, flow direction, maximum soil moisture, elevation, and drainage order. The regional high-flow or low-flow relationship between one streamflow characteristic (AM high flows or 7-day annual minimum low flows) and several basin characteristics is developed using a Generalized Linear Mixed Model (GLMM) and a Generalized Least Squares (GLS) regression model, providing a simple and effective method for estimating floods and droughts of desired return periods for ungauged catchments.
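The L-moment machinery underlying this kind of regional analysis can be sketched with the standard probability-weighted-moment estimators; the sample L-moment ratios (L-CV, L-skewness) are what drive distribution choice and homogeneity testing. The flow values below are invented, not Polish gauge data:

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments (l1, l2) and L-skewness t3,
    from the unbiased probability-weighted moments b0, b1, b2."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0                      # L-location (the mean)
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0    # third L-moment
    return l1, l2, l3 / l2       # t3 = L-skewness

# Invented annual-maximum flows for one site (m^3/s)
am_flows = [120.0, 95.0, 210.0, 160.0, 300.0, 140.0, 180.0, 110.0, 250.0, 130.0]
print(sample_l_moments(am_flows))
```

In a regional study, these ratios are computed per site and then compared across the pooled region, for example via the discordancy and heterogeneity measures mentioned above.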

Keywords: flood, drought, frequency, magnitude, regionalization, stochastic, ungauged, Poland

Procedia PDF Downloads 602
39 InAs/GaSb Superlattice Photodiode Array ns-Response

Authors: Utpal Das, Sona Das

Abstract:

InAs/GaSb type-II superlattice (T2SL) mid-wave infrared (MWIR) focal plane arrays (FPAs) have recently seen rapid development. However, in small-pixel-size, large-format FPAs, high mesa-sidewall surface leakage current is a major constraint necessitating proper surface passivation. A simple pixel isolation technique for InAs/GaSb T2SL detector arrays, without conventional mesa etching, has been proposed: the pixels are isolated by forming a more resistive, higher-band-gap material from the SL in the inter-pixel region. Here, a single-step femtosecond (fs) laser anneal of the inter-pixel T2SL regions has been used to increase the band gap between the pixels by quantum-well (QW) intermixing and hence increase the isolation between pixels. The p-i-n photodiode structure used here consists of a 506 nm, (10 monolayer {ML}) InAs:Si (1x10¹⁸cm⁻³)/(10 ML) GaSb SL as the bottom n-contact layer, grown on an n-type GaSb substrate. The undoped absorber layer consists of a 1.3 µm, (10 ML) InAs/(10 ML) GaSb SL. The top p-contact layer is a 63 nm, (10 ML) InAs:Be (1x10¹⁸cm⁻³)/(10 ML) GaSb T2SL. To improve carrier transport, a 126 nm graded-doped (10 ML) InAs/(10 ML) GaSb SL layer was added between the absorber and each contact layer. A 775 nm, 150 fs laser at a fluence of ~6 mJ/cm² is used to expose the array, with the pixel regions masked by a Ti(200 nm)-Au(300 nm) cap. In the inter-pixel regions, the p+ layer has been removed by reactive ion etching (RIE) using CH₄+H₂ chemistry before fs-laser exposure. The isolation improvement in 200-400 μm pixels, due to spatially selective quantum-well intermixing producing a blue shift of ~70 meV in the inter-pixel regions, is confirmed by FTIR measurements. Dark currents are measured between two adjacent pixels with the Ti(200 nm)-Au(300 nm) caps used as contacts. The T2SL quality in the active photodiode regions masked by the Ti-Au cap is hardly affected and retains the original detector quality. 
Although fs-laser anneal of p+-only-etched p-i-n T2SL diodes shows a reduction in the reverse dark current, no significant improvement is noticeable in fully RIE-etched mesa structures. Hence, for the fabrication of a 128x128 array of 8 μm square pixels at 10 µm pitch, SU8 polymer isolation after RIE pixel delineation has been used. X-direction n+ row contacts and Y-direction p+ column contacts have been used to measure the optical response of the individual pixels. The photo-response of these 8 μm and other 200 μm pixels under a 2 ns optical pulse excitation from an Optical Parametric Oscillator (OPO) shows peak responsivities of ~0.03 A/W and 0.2 mA/W, respectively, at λ~3.7 μm. The temporal response of this detector array shows a fast component of ~10 ns followed by a typical slow decay with ringing, attributed to the impedance mismatch of the connecting coaxial cables. In conclusion, response times of a few ns have been measured in 8 µm pixels of a 128x128 array. Although fs-laser anneal has been found useful for increasing inter-pixel isolation in InAs/GaSb T2SL arrays by QW intermixing, it has not been found suitable for passivating fully RIE-etched mesa structures with vertical walls on InAs/GaSb T2SL.
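As a back-of-the-envelope check on the reported ~70 meV blue shift, one can treat the ~3.7 μm response peak as an effective transition energy and use E(eV) ≈ 1.2398/λ(μm). This is purely illustrative arithmetic, not the authors' calculation:

```python
HC_EV_UM = 1.2398  # h*c expressed in eV*um

def wavelength_um(e_gap_ev):
    """Photon wavelength (um) corresponding to a transition energy (eV)."""
    return HC_EV_UM / e_gap_ev

e_pixel = HC_EV_UM / 3.7        # effective transition energy at the ~3.7 um peak
e_interpixel = e_pixel + 0.070  # ~70 meV blue shift from QW intermixing
print(round(wavelength_um(e_interpixel), 2))  # 3.06
```

That is, the intermixed inter-pixel material would stop absorbing the ~3.7 μm photons that the masked pixel regions still detect, which is consistent with the isolation mechanism described above.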

Keywords: band-gap blue-shift, fs-laser anneal, InAs/GaSb T2SL, inter-pixel isolation, ns-response, photodiode array

Procedia PDF Downloads 152
38 Automatic Content Curation of Visual Heritage

Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara Défayes, Mathieu Salzmann, Frédéric Kaplan, Nicolas Henchoz

Abstract:

Digitization and preservation of large heritage collections induce high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justify the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections in the world, of which 52,000 posters have been digitised. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps taken to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. The challenge has similarities with the domain of song-playlist algorithms. Recently, artificial intelligence techniques, and more specifically deep-learning algorithms, have been used to facilitate their generation. Promising results were obtained with Recurrent Neural Networks (RNN) trained on manually generated playlists and paired with clusters of features extracted from songs. We used the same principles to create the proposed algorithm, but applied to a challenging medium: posters. First, a convolutional autoencoder was trained to extract features of the posters, with the 52,000 digital posters used as the training set. The poster features were then clustered. Next, an RNN learned to predict the next cluster from the previous ones. The RNN training set was composed of poster sequences extracted from a collection of books of the Gestaltung Museum of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster closest to the previous poster is selected, with proximity computed as the mean squared distance between poster features. 
To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. Manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest, and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25%, and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45%, and third 30% of the time. The sequences produced randomly were ranked first 15%, second 29%, and third 55% of the time. Compared to the designer sequences, and as reported by participants, the model and random sequences lacked thematic continuity. According to these results, the proposed model generates better poster sequencing than random sampling, and our algorithm is sometimes even able to outperform a professional designer. As a next step, the proposed algorithm should include the possibility to create sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and to provide a tool to curate large sets of data, with a permanent renewal of the presented content.
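The final selection step (nearest poster within the RNN-predicted cluster, by mean squared feature distance) can be sketched as follows, with tiny made-up feature vectors standing in for the autoencoder output:

```python
import numpy as np

def next_poster(prev_feature, features, clusters, predicted_cluster):
    """Within the predicted cluster, return the index of the poster whose
    features are closest (mean squared distance) to the previous poster."""
    candidates = np.where(clusters == predicted_cluster)[0]
    dists = ((features[candidates] - prev_feature) ** 2).mean(axis=1)
    return int(candidates[np.argmin(dists)])

# Made-up 3-D features for 5 posters grouped into 2 clusters
features = np.array([[0.0, 0, 0], [1, 1, 1], [5, 5, 5], [6, 6, 6], [9, 9, 9]])
clusters = np.array([0, 0, 1, 1, 1])
print(next_poster(features[1], features, clusters, predicted_cluster=1))  # 2
```

In the real pipeline the `features` matrix would come from the trained convolutional autoencoder and `predicted_cluster` from the RNN; only the nearest-neighbour lookup is shown here.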

Keywords: artificial intelligence, digital humanities, serendipity, design research

Procedia PDF Downloads 184
37 Microsimulation of Potential Crashes as a Road Safety Indicator

Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale

Abstract:

Traffic microsimulation has been used extensively to evaluate the consequences of different traffic planning and control policies in terms of travel-time delays, queues, pollutant emissions, and other commonly measured performances, while traffic safety has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques, introduced for intersections in the early traffic research carried out at the General Motors laboratory in the USA and in the Swedish traffic conflict manual, have been applied to vehicle trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a basis for calculating the number of conflicts, which defines the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures to improve safety. Unfortunately, the most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers, even though a great number of vehicle crashes involve roadside objects or obstacles; only some recently proposed indicators have tried to address this issue. This paper introduces a new procedure, based on the simulation of potential crash events, for the evaluation of safety levels in microsimulation traffic scenarios, which also takes into account potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed simulation procedure generates, through random perturbation of vehicle trajectories, a set of potential crashes which can be evaluated accurately in terms of Delta-V, the energy of the impact, and/or the expected number of injuries or casualties. The procedure can also be applied to real trajectories, giving rise to new surrogate safety performance indicators, which can be considered "simulation-based". 
The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. Results indicate that the procedure is able to evaluate safety levels both at the intersection level and in the presence of roadside obstacles. The procedure produces results expressed in the same unit of measure for both vehicle-to-vehicle and vehicle-to-roadside-object conflicts. The total energy per square meter of all generated crashes can be used and is shown on a map for the test network after applying a threshold to highlight the most dangerous points. Even without detailed calibration of the microsimulation model and without calibration of the parameters of the procedure (standard values were used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that the results do not depend strongly on the energy thresholds or on the other parameters of the procedure. This paper introduces this new procedure and its implementation as a software package able to assess road safety while also considering potential conflicts with roadside objects, and discusses some of the principles at the base of the model. The procedure can be applied with common microsimulation packages once the vehicle trajectories and the positions of roadside barriers and obstacles are known. The procedure has many calibration parameters, and research efforts will have to be devoted to comparisons with real crash data in order to obtain the parameter values that give an accurate evaluation of the risk of any traffic scenario.
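A minimal sketch of how a potential crash between two simulated trajectories could be scored in terms of Delta-V and impact energy, using a perfectly plastic (common final velocity) collision model. The masses and speeds are illustrative, and this is an assumed formulation, not necessarily the authors' exact one:

```python
import numpy as np

def crash_severity(m1, v1, m2, v2):
    """Delta-V of each vehicle (m/s) and dissipated energy (J) for a
    perfectly plastic collision between masses m1, m2 (kg) with 2-D
    velocity vectors v1, v2 (m/s)."""
    v_rel = np.linalg.norm(np.asarray(v1, float) - np.asarray(v2, float))
    dv1 = m2 / (m1 + m2) * v_rel
    dv2 = m1 / (m1 + m2) * v_rel
    mu = m1 * m2 / (m1 + m2)        # reduced mass
    energy = 0.5 * mu * v_rel ** 2  # kinetic energy lost in the impact
    return dv1, dv2, energy

# Car at 10 m/s rear-ending a stopped car of equal mass
print(crash_severity(1000.0, [10.0, 0.0], 1000.0, [0.0, 0.0]))
```

A vehicle-to-roadside-object conflict can be scored with the same function by giving the fixed obstacle a very large mass: its Delta-V then goes to zero while the vehicle's Delta-V approaches the full relative speed, which is how both conflict types end up in the same unit of measure.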

Keywords: road safety, traffic, traffic safety, traffic simulation

Procedia PDF Downloads 135
36 Biotite from Contact-Metamorphosed Rocks of the Dizi Series of the Greater Caucasus

Authors: Irakli Javakhishvili, Tamara Tsutsunava, Giorgi Beridze

Abstract:

The Caucasus is a component of the Mediterranean collision belt. The Dizi series is situated within the Greater Caucasian region of the Caucasus and crops out in the core of the Svaneti anticlinorium. The series was formed under continental-slope conditions on the southern passive margin of a small ocean basin. It crops out over about 560 km², with a thickness of 2000-2200 m, and the rocks are faunally dated from the Devonian to the Triassic inclusive. The series is composed of terrigenous phyllitic schists, sandstones, and quartzitic aleurolites, with lenses and interlayers of marbleized limestones. During the early Cimmerian orogeny, the rocks underwent regional metamorphism of the chlorite-sericite subfacies of the greenschist facies. Typical minerals of the metapelites are chlorite, sericite, augite, quartz, and tourmaline, while in the basic rocks actinolite, fibrolite, prehnite, calcite, and chlorite are developed. The Dizi series is cut by polyphase intrusions of gabbros, diorites, quartz-diorites, syenite-diorites, syenites, and granitoids. Their K-Ar age dating (176-165 Ma) indicates that their formation corresponds to the Bathonian orogeny. The Dizi series is well studied geologically, but the very complicated processes of its regional and contact metamorphism are insufficiently investigated. The aim of the authors was a detailed study of the contact metamorphism of the series. Investigations were accomplished applying the following methodologies: identification of key sections, collection of material, microscopic study of samples, microprobe and structural analysis of minerals, and X-ray determination of elements. The contact-metamorphosed rocks of the Dizi series formed through the influence of the Bathonian magmatites on metapelites and carbonate-rich rocks. They are represented by quartz-, biotite-, sericite-, graphite-, andalusite-, muscovite-, plagioclase-, corundum-, cordierite-, clinopyroxene-, hornblende-, cummingtonite-, actinolite-, and tremolite-bearing hornfels, marbles, and skarns. 
The contact metamorphism aureole reaches 350 meters. Biotite is developed only in the contact-metamorphosed rocks and is a rather informative index mineral. In the metapelites, biotite forms as a result of the reaction between phengite, chlorite, and leucoxene, while in the basites it replaces actinolite or actinolitic hornblende. To study the compositional regularities of biotite, it was investigated from both metapelites and metabasites. Overall, biotite from the basites is characterized by an increased titanium content in contrast to biotite from the metapelites, whereas biotite from the metapelites is distinguished by an increased amount of aluminum. In the biotites, increased amounts of titanium and aluminum are observed as the contact is approached, while the magnesium content decreases. Metapelite biotites are characterized by an increased amount of aluminum in the octahedral sites, in contrast to biotites of the basites: in metapelite biotites, tetrahedral aluminum amounts to 28-34% and octahedral aluminum to 15-26%, whereas in the basites tetrahedral aluminum is 28-33% and octahedral aluminum 7-21%. As a result of the study of the minerals, including biotite, from the contact-metamorphosed rocks of the Dizi series, three exocontact zones with corresponding mineral assemblages were identified. It was established that contact metamorphism in the aureole of the Dizi series intrusions took place at a significantly higher temperature and lower pressure than the regional metamorphism that preceded it.

Keywords: biotite, contact metamorphism, Dizi series, the Greater Caucasus

Procedia PDF Downloads 132
35 Correlation of Clinical and Sonographic Findings with Cytohistology for Diagnosis of Ovarian Tumours

Authors: Meenakshi Barsaul Chauhan, Aastha Chauhan, Shilpa Hurmade, Rajeev Sen, Jyotsna Sen, Monika Dalal

Abstract:

Introduction: Ovarian masses are common forms of neoplasm in women and represent two-thirds of gynaecological malignancies. A pre-operative suggestion of malignancy can guide the gynecologist to refer women with a suspected pelvic mass to a gynecological oncologist for appropriate, optimized treatment, which can improve survival. In the younger age group, pre-operative differentiation into benign or malignant pathology can inform the choice between conservative and radical surgery. Imaging modalities have a definite role in establishing the diagnosis. By using the International Ovarian Tumor Analysis (IOTA) classification with sonography, the use of costly radiological methods like Magnetic Resonance Imaging (MRI) and computed tomography (CT) can be reduced, especially in developing countries like India. This study was therefore undertaken to evaluate the role of clinical methods and sonography in diagnosing the nature of ovarian tumors. Material and Methods: This prospective observational study was conducted on 40 patients presenting with ovarian masses in the Department of Obstetrics and Gynaecology at a tertiary care center in northern India. Functional cysts were excluded. Ultrasonography and color Doppler were performed in all cases, and the IOTA rules were applied, which take into account locularity, size, presence of solid components, acoustic shadow, Doppler flow, etc. MRI/CT scans of the abdomen and pelvis were done in cases where sonography was inconclusive. In inoperable cases, fine needle aspiration cytology (FNAC) was done. The histopathology report after surgery and the cytology report after FNAC were correlated statistically with the pre-operative diagnosis made clinically and sonographically using the IOTA rules. Statistical Analysis: Descriptive measures were analyzed using means and standard deviations with the Student t-test, and proportions were analyzed by applying the chi-square test. 
Inferential measures were analyzed by sensitivity, specificity, negative predictive value, and positive predictive value. Results: A provisional diagnosis of benign tumor was made in 16 (42.5%) patients and of malignant tumor in 24 (57.5%) patients on the basis of clinical findings. With the IOTA simple rules on sonography, 15 (37.5%) masses were found to be benign and 23 (57.5%) malignant, while findings were inconclusive in 2 patients (5%). FNAC/histopathology, taken as the gold standard, reported 14 (35%) benign and 26 (65%) malignant ovarian tumors. Clinical findings alone had a sensitivity of 66.6% and a specificity of 90.9%; USG alone had a sensitivity of 86% and a specificity of 80%. When clinical findings and the IOTA simple rules of sonography were combined (excluding inconclusive masses), the sensitivity and specificity were 83.3% and 92.3%, respectively; including inconclusive masses, the sensitivity was 91.6% and the specificity 89.2%. Conclusion: The IOTA simple rules on sonography are highly sensitive and specific in the prediction of ovarian malignancy, and they are also easy to use and easily reproducible. Thus, combining clinical examination with USG will help in the better management of patients in terms of time, cost, and prognosis, and will also avoid the need for costlier modalities like CT and MRI.
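The inferential measures above follow directly from a 2x2 diagnostic table against the gold standard. A minimal sketch (the counts below are hypothetical, not the study's table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table comparing a
    test against the gold standard (here, FNAC/histopathology)."""
    return {
        "sensitivity": tp / (tp + fn),  # malignant correctly called malignant
        "specificity": tn / (tn + fp),  # benign correctly called benign
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for a 40-patient series
print(diagnostic_metrics(tp=20, fp=2, fn=4, tn=14))
```

Sensitivity and specificity characterize the test itself, while PPV and NPV also depend on the prevalence of malignancy in the series, which is why all four are reported.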

Keywords: benign, international ovarian tumor analysis classification, malignant, ovarian tumours, sonography

Procedia PDF Downloads 80
34 On the Influence of Sleep Habits for Predicting Preterm Births: A Machine Learning Approach

Authors: C. Fernandez-Plaza, I. Abad, E. Diaz, I. Diaz

Abstract:

Births occurring before the 37th week of gestation are considered preterm births. A threat of preterm labour is defined as the onset of regular uterine contractions, dilation, and cervical effacement between 23 and 36 weeks of gestation. To the authors' best knowledge, the factors that determine the beginning of birth are not yet completely defined. In particular, the influence of sleep habits on preterm births is weakly studied. The aim of this study is to develop a model to predict the factors affecting premature delivery, based on potential risk factors including those derived from sleep habits and light exposure at night (introduced as 12 variables obtained by a telephone survey using two questionnaires previously used by other authors). Thus, three groups of variables were included in the study (maternal, fetal, and sleep habits). The study was approved by the Research Ethics Committee of the Principado de Asturias (Spain). An observational, retrospective, and descriptive study was performed on 481 births between January 1, 2015 and May 10, 2016 in the University Central Hospital of Asturias (Spain). A statistical analysis using SPSS was carried out to compare qualitative and quantitative variables between preterm and term deliveries: the chi-square test was applied to qualitative variables and the t-test to quantitative variables. Statistically significant differences (p < 0.05) between preterm and term births were found for primiparity, multiparity, kind of conception, place of residence, premature rupture of membranes, and interruption during nights. In addition to the statistical analysis, machine learning methods were tested to look for a prediction model. In particular, tree-based models were applied, as their trade-off between performance and interpretability is especially suitable for this study. C5.0, recursive partitioning, random forest, and tree bag models were analysed using the caret R package. 
Cross-validation with 10 folds and parameter tuning to optimize the methods were applied. In addition, different noise reduction methods were applied to the initial data using the NoiseFiltersR package. The best performance was obtained by the C5.0 method, with accuracy 0.91, sensitivity 0.93, specificity 0.89, and precision 0.91. Some well-known preterm birth factors were identified: cervix dilation, maternal BMI, premature rupture of membranes, and nuchal translucency analysis in the first trimester. The model also identifies other new factors related to sleep habits, such as light through the window, bedtime on working days, usage of electronic devices before sleeping from Mondays to Fridays, or a change of sleeping habits reflected in the number of hours, in the depth of sleep, or in the lighting of the room. "IF dilation <= 2.95 AND usage of electronic devices before sleeping from Mondays to Fridays = YES AND change of sleeping habits = YES, THEN preterm" is one of the prediction rules obtained by C5.0. In this work, a model for predicting preterm births is developed, based on machine learning together with noise reduction techniques; the method maximizing the performance is the one selected. The model shows the influence of variables related to sleep habits on preterm prediction.
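The C5.0 rule quoted above can be transcribed as a plain predicate. This is only a transcription of that single rule for illustration, not the fitted model (a C5.0 classifier combines many such rules with confidences):

```python
def preterm_rule(dilation, devices_before_sleep, changed_sleep_habits):
    """Transcription of one C5.0 rule from the abstract: predicts preterm
    when dilation <= 2.95 and both sleep-habit conditions hold."""
    return dilation <= 2.95 and devices_before_sleep and changed_sleep_habits

print(preterm_rule(2.0, True, True))   # rule fires -> True
print(preterm_rule(3.5, True, True))   # dilation above threshold -> False
```

Such rule readability is exactly the performance/interpretability trade-off that motivated the choice of tree-based models in the study.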

Keywords: machine learning, noise reduction, preterm birth, sleep habit

Procedia PDF Downloads 147
33 Interactive Virtual Patient Simulation Enhances Pharmacology Education and Clinical Practice

Authors: Lyndsee Baumann-Birkbeck, Sohil A. Khan, Shailendra Anoopkumar-Dukie, Gary D. Grant

Abstract:

Technology-enhanced education tools are being rapidly integrated into health programs globally. These tools provide an interactive platform for students and can be used to deliver topics in various modes, including games and simulations. Simulations are of particular interest to healthcare education, where they are employed to enhance clinical knowledge and help to bridge the gap between theory and practice. Simulations often assess competencies for practical tasks, yet limited research examines the effects of simulation on students' perceptions of their learning. The aim of this study was to determine the effects of an interactive virtual patient simulation for pharmacology education and clinical practice on student knowledge, skills, and confidence. Ethics approval for the study was obtained from the Griffith University Research Ethics Committee (PHM/11/14/HREC). The simulation was intended to replicate the pharmacy environment and patient interaction. The content was designed to enhance knowledge of proton-pump inhibitor pharmacology, its role in therapeutics, and safe supply to patients. The tool was deployed in a third-year clinical pharmacology and therapeutics course. A number of core practice areas were examined, including the competency domains of questioning, counselling, referral, and product provision. Baseline measures of student self-reported knowledge, skills, and confidence were taken prior to the simulation using a specifically designed questionnaire. A more extensive questionnaire was deployed following the virtual patient simulation, which also included measures of student engagement with the activity. A quiz assessing students' factual and conceptual knowledge of proton-pump inhibitor pharmacology and related counselling information was also included in both questionnaires. Sixty-one students (response rate >95%) from two cohorts (2014 and 2015) participated in the study. Chi-square analyses were performed and data analysed using Fisher's exact test. 
The results demonstrate that student knowledge, skills, and confidence within the competency domains of questioning, counselling, referral, and product provision improved following the implementation of the virtual patient simulation. Statistically significant (p<0.05) improvement occurred in ten of the possible twelve self-reported measurement areas. The greatest magnitude of improvement occurred in the area of counselling (student confidence, p<0.0001), and student confidence in all domains (questioning, counselling, referral, and product provision) showed a marked increase. Student performance in the quiz also improved, demonstrating a 10% overall improvement in pharmacology knowledge and clinical practice following the simulation. Overall, 85% of students reported the simulation to be engaging, and 93% of students felt the virtual patient simulation enhanced learning. The data suggest that the interactive virtual patient simulation developed for clinical pharmacology and therapeutics education enhanced students' knowledge, skills, and confidence with respect to the competency domains of questioning, counselling, referral, and product provision. These self-reported measures appear to translate into learning outcomes, as demonstrated by the improved student performance in the quiz assessment item. Future research on education using virtual simulation should seek to incorporate modern quantitative measures of student learning and engagement, such as eye tracking.
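A self-contained sketch of the two-sided Fisher's exact test used in the analysis, implemented from the hypergeometric definition. The example table is hypothetical, not the study's data:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same margins
    that are no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):  # probability of a table with top-left cell x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# Hypothetical 2x2 table: improved vs. not improved, by cohort (2014 vs. 2015)
print(round(fisher_exact_p(3, 1, 1, 3), 4))  # 0.4857
```

Fisher's exact test is preferred over chi-square when expected cell counts are small, as happens readily with a sample of 61 students split across cohorts and response categories.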

Keywords: clinical simulation, education, pharmacology, simulation, virtual learning

Procedia PDF Downloads 338
32 Detection of Patient Roll-Over Using High-Sensitivity Pressure Sensors

Authors: Keita Nishio, Takashi Kaburagi, Yosuke Kurihara

Abstract:

Recent advances in medical technology have served to enhance average life expectancy. However, the total time for which patients are prescribed complete bedrest has also increased. Since maintaining a constant lying posture can cause pressure ulcers (bedsores), the development of a system to detect patient roll-over becomes imperative. For this purpose, extant studies have proposed the use of cameras, and favorable results have been reported. Continuous on-camera monitoring, however, tends to violate patient privacy. We have previously proposed an unconstrained bio-signal measurement system that can detect body motion during sleep without violating patient privacy. In this study, we therefore propose a roll-over detection method based on the data obtained from this bio-signal measurement system. Signals recorded by the sensor were assumed to comprise respiration, pulse, body-motion, and noise components. Compared with the respiration and pulse components, the body-motion component generates large vibrations during roll-over. Thus, analysis of the body-motion component facilitates detection of the roll-over tendency. The large vibration associated with the roll-over motion has a great effect on the Root Mean Square (RMS) value of the time series of the body-motion component, calculated over short 10 s segments. After calculation, the RMS value of each segment was compared to a threshold value set in advance. If the RMS value in any segment exceeded the threshold, the corresponding data were considered to indicate the occurrence of a roll-over. In order to validate the proposed method, we conducted an experiment. A bi-directional microphone was adopted as a high-sensitivity pressure sensor and was placed between the mattress and the bed frame. Recorded signals passed through an analog Band-Pass Filter (BPF) operating over the 0.16-16 Hz bandwidth. The BPF allowed the respiration, pulse, and body-motion components to pass whilst removing the noise component.
Output from the BPF was A/D converted at a sampling frequency of 100 Hz, and the measurement time was 480 seconds. Five subjects participated, yielding ten recordings in total. Subjects lay on a mattress in the supine position. During data measurement, subjects, upon the investigator's instruction, were asked to roll over into four different positions: supine to left lateral, left lateral to prone, prone to right lateral, and right lateral to supine. Recorded data were divided into 48 segments of 10 s each, and the corresponding RMS value for each segment was calculated. The system was evaluated by the agreement between the investigator's instructions and the detected segments. As a result, an accuracy of 100% was achieved. While reviewing the time series of recorded data, segments indicating roll-over tendencies were observed to demonstrate a large amplitude. However, clear differences between the decubitus posture and the roll-over motion could not be confirmed. Extant research suffers a disadvantage in terms of patient privacy. The proposed method, however, demonstrates precise detection of patient roll-over tendencies without violating patient privacy. As a future prospect, estimation of the decubitus posture before and after roll-over could be attempted. Since clear differences between the decubitus posture and the roll-over motion could not be confirmed in this work, future studies could be based on utilization of the respiration and pulse components.
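The segment-wise RMS thresholding described above can be sketched as follows. The synthetic signal, the burst location, and the threshold value are stand-ins for the measured body-motion component, not the experimental data:

```python
import numpy as np

fs = 100                      # sampling frequency (Hz), as in the experiment
seg_len = 10 * fs             # 10 s segments -> 48 segments over 480 s
n_samples = 480 * fs

# Synthetic body-motion component: low-level noise plus a large burst
# (a simulated roll-over) during 100-110 s.
rng = np.random.default_rng(0)
signal = 0.01 * rng.standard_normal(n_samples)
signal[100 * fs:110 * fs] += 0.5 * rng.standard_normal(10 * fs)

# RMS of each 10 s segment, compared against a preset threshold.
rms = np.sqrt((signal.reshape(-1, seg_len) ** 2).mean(axis=1))
threshold = 0.1
rollover_segments = np.flatnonzero(rms > threshold)
print(rollover_segments)      # segment index 10 covers 100-110 s
```

Segment index 10 (seconds 100-110) is the only one whose RMS exceeds the threshold, mirroring how a detected segment is matched against the investigator's instruction.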

Keywords: bedsore, high-sensitivity pressure sensor, roll-over, unconstrained bio-signal measurement

Procedia PDF Downloads 121
31 Measurement System for Human Arm Muscle Magnetic Field and Grip Strength

Authors: Shuai Yuan, Minxia Shi, Xu Zhang, Jianzhi Yang, Kangqi Tian, Yuzheng Ma

Abstract:

The precise measurement of muscle activity is essential for understanding the function of various body movements. This work aims to develop a muscle magnetic field signal detection system based on mathematical analysis. Medical research has underscored that early detection of muscle atrophy, coupled with lifestyle adjustments such as dietary control and increased exercise, can significantly improve the prognosis of muscle-related diseases. Currently, surface electromyography (sEMG) is widely employed in research as an early predictor of muscle atrophy. Nonetheless, the primary limitation of using sEMG to forecast muscle strength is its inability to directly measure the signals generated by muscles. Challenges arise from potential skin-electrode contact issues due to perspiration, leading to inaccurate signals or even signal loss. Additionally, resistance and phase are significantly impacted by adipose layers. The recent emergence of optically pumped magnetometers introduces a fresh avenue for bio-magnetic field measurement techniques. These magnetometers possess high sensitivity and, unlike superconducting quantum interference devices (SQUIDs), obviate the need for a cryogenic environment. They detect muscle magnetic field signals in the range of tens to thousands of femtoteslas (fT). The use of magnetometers for capturing muscle magnetic field signals remains unaffected by issues of perspiration and adipose layers. Since their introduction, optically pumped atomic magnetometers have found extensive application in exploring organ magnetic fields, such as cardiac and brain magnetism. The optimal operation of these magnetometers necessitates an environment with an ultra-weak magnetic field. To achieve such an environment, researchers usually combine active magnetic compensation technology with passive magnetic shielding technology.
Passive magnetic shielding technology uses a shielding device built from high-permeability materials to attenuate the external magnetic field to a few nT. Compared with adding more shielding layers, coils that generate a reverse magnetic field to precisely compensate for the residual field are cheaper and more flexible. To attain even lower magnetic fields, compensation coils designed via the Biot-Savart law are employed to generate a counteractive magnetic field that eliminates residual fields. By solving the magnetic field expression at discrete points in the target region, the parameters that determine the current density distribution on the plane can be obtained through the conventional target field method. The current density is obtained from the partial derivative of the stream function, which can be represented by a combination of trigonometric functions. Mathematical optimization algorithms are introduced into the coil design to obtain the optimal current density distribution. A one-dimensional linear regression analysis was performed on the collected data, obtaining a coefficient of determination R² of 0.9349 with a p-value of effectively zero. This statistical result indicates a stable relationship between the peak-to-peak value (PPV) of the muscle magnetic field signal and the magnitude of grip strength. This system is expected to become a widely used tool for healthcare professionals to gain deeper insights into the muscle health of their patients.
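The reported PPV-versus-grip-strength regression can be reproduced in miniature with NumPy. The paired values below are invented solely to illustrate how the slope and the coefficient of determination are computed:

```python
import numpy as np

# Hypothetical paired measurements: grip strength (kg) vs. peak-to-peak
# value (PPV) of the muscle magnetic field signal (arbitrary units).
grip = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
ppv = np.array([1.1, 1.6, 2.0, 2.6, 3.1, 3.4, 4.0])

# One-dimensional linear regression (least squares) and R^2.
slope, intercept = np.polyfit(grip, ppv, 1)
residuals = ppv - (slope * grip + intercept)
r_squared = 1.0 - residuals.var() / ppv.var()
print(f"slope = {slope:.4f}, R^2 = {r_squared:.4f}")
```

A positive slope with R² close to 1 is what the reported R² of 0.9349 asserts for the real PPV/grip-strength data.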

Keywords: muscle magnetic signal, magnetic shielding, compensation coils, trigonometric functions

Procedia PDF Downloads 56
30 Implementation of Green Deal Policies and Targets in Energy System Optimization Models: The TEMOA-Europe Case

Authors: Daniele Lerede, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

The European Green Deal is the first internationally agreed set of measures to counter climate change and environmental degradation. Besides the main target of reducing emissions by at least 55% by 2030, it sets the goal of accompanying European countries through an energy transition to make the European Union a modern, resource-efficient, and competitive net-zero-emissions economy by 2050, decoupling growth from the use of resources and ensuring a fair adaptation of all social categories to the transformation process. While the overall framework of the Green Deal dates back to 2019, strategies and policies keep being developed to cope with recent circumstances and achievements. However, general long-term measures like the Circular Economy Action Plan, the proposals to shift from fossil natural gas to renewable and low-carbon gases, in particular biomethane and hydrogen, and to end the sale of gasoline and diesel cars by 2035, will all have significant effects on energy supply and demand evolution across the next decades. The interactions between energy supply and demand over long time frames are usually assessed via energy system models to derive useful insights for policymaking and to address technological choices and research and development. TEMOA-Europe is a newly developed energy system optimization model instance based on the minimization of the total cost of the system under analysis, adopting a technologically integrated, detailed, and explicit formulation and considering the evolution of the system in partial equilibrium in competitive markets with perfect foresight. TEMOA-Europe is developed on the TEMOA platform, an open-source modeling framework entirely implemented in Python, thus ensuring third-party verification even of large and complex models.
TEMOA-Europe is based on a single-region representation of the European Union and EFTA countries on a time scale between 2005 and 2100, relying on a set of assumptions for socio-economic developments based on projections by the International Energy Outlook and a large technological dataset covering seven sectors: the upstream and power sectors, for the production of all energy commodities, and the end-use sectors, including industry, transport, residential, commercial and agriculture. TEMOA-Europe also includes an updated hydrogen module considering its production, storage, transportation, and utilization. Besides, it can rely on a wide set of innovative technologies, ranging from nuclear fusion and electricity plants equipped with CCS in the power sector to electrolysis-based steel production processes in the industrial sector, with a techno-economic characterization based on the public literature, to produce insightful energy scenarios and, especially, to cope with the very long analyzed time scale. The aim of this work is to examine in detail the scheme of measures and policies for the realization of the purposes of the Green Deal and to transform them into a set of constraints and new socio-economic development pathways. Based on them, TEMOA-Europe will be used to produce and comparatively analyze scenarios to assess the consequences of Green Deal-related measures on the future evolution of the energy mix over the whole energy system in an economic optimization environment.
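The cost-minimization logic at the core of a TEMOA-style model can be illustrated with a deliberately tiny linear program. The two technologies, their costs, the demand level, and the emissions cap below are all invented numbers, not TEMOA-Europe inputs:

```python
from scipy.optimize import linprog

cost = [50.0, 70.0]      # cost per TWh of two hypothetical technologies
emis = [0.4, 0.0]        # MtCO2 emitted per TWh produced (hypothetical)
demand = 100.0           # TWh of end-use demand to satisfy
emis_cap = 20.0          # MtCO2 emissions cap (a Green Deal-style constraint)

# Minimize total system cost subject to meeting demand and the cap.
res = linprog(
    c=cost,
    A_ub=[emis], b_ub=[emis_cap],       # emissions <= cap
    A_eq=[[1.0, 1.0]], b_eq=[demand],   # supply == demand
    bounds=[(0, None), (0, None)],
)
fossil, renewable = res.x
print(f"fossil = {fossil:.0f} TWh, renewable = {renewable:.0f} TWh")
```

The emissions cap binds at 50 TWh of the cheaper fossil option, forcing the remaining 50 TWh onto the cleaner technology; tightening the cap shifts the optimal mix the same way a policy constraint does in the full model.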

Keywords: European Green Deal, energy system optimization modeling, scenario analysis, TEMOA-Europe

Procedia PDF Downloads 105
29 Burkholderia Cepacia ST 767 Causing a Three Years Nosocomial Outbreak in a Hemodialysis Unit

Authors: Gousilin Leandra Rocha Da Silva, Stéfani T. A. Dantas, Bruna F. Rossi, Erika R. Bonsaglia, Ivana G. Castilho, Terue Sadatsune, Ary Fernandes Júnior, Vera L. M. Rall

Abstract:

Kidney failure causes decreased diuresis and the accumulation of nitrogenous substances in the body. To increase patient survival, hemodialysis is used as a partial substitute for renal function. However, contamination of the water used in this treatment, causing bacteremia in patients, is a worldwide concern. The Burkholderia cepacia complex (Bcc), a group of bacteria with more than 20 species, is frequently isolated from hemodialysis water samples and comprises opportunistic bacteria affecting immunosuppressed patients, due to its wide variety of virulence factors, in addition to innate resistance to several antimicrobial agents, contributing to its persistence in the hospital environment and to pathogenesis in the host. The objective of the present work was to characterize, both molecularly and phenotypically, Bcc isolates collected from the water and dialysate of a Hemodialysis Unit and from the blood of patients at a public hospital in Botucatu, São Paulo, Brazil, between 2019 and 2021. We used 33 Bcc isolates previously obtained from blood cultures of patients with bacteremia undergoing hemodialysis treatment (2019-2021) and 24 isolates obtained from water and dialysate samples in the Hemodialysis Unit over the same period. The recA gene was sequenced to identify the specific species within the Bcc group. All isolates were tested for the presence of genes that encode virulence factors, such as cblA, esmR, zmpA and zmpB. Considering the epidemiology of the outbreak, the Bcc isolates were molecularly characterized by Multilocus Sequence Typing (MLST) and by pulsed-field gel electrophoresis (PFGE). The verification and quantification of biofilm on polystyrene microplates were performed by submitting the isolates to different incubation temperatures (20°C, the average water temperature, and 35°C, the optimal growth temperature for the group).
The antibiogram was performed with disc diffusion tests on agar, using discs impregnated with cefepime (30 µg), ceftazidime (30 µg), ciprofloxacin (5 µg), gentamicin (10 µg), imipenem (10 µg), amikacin (30 µg), sulfamethoxazole/trimethoprim (23.75/1.25 µg) and ampicillin/sulbactam (10/10 µg). The zmpB gene was identified in all isolates and zmpA in 96.5% of them, while none presented the cblA or esmR genes. The antibiogram of the 33 human isolates indicated that all were resistant to gentamicin, colistin, ampicillin/sulbactam and imipenem. Sixteen (48.5%) isolates were resistant to amikacin, and lower rates of resistance were observed for meropenem, ceftazidime, cefepime, ciprofloxacin and piperacillin/tazobactam (6.1%). All isolates were sensitive to sulfamethoxazole/trimethoprim, levofloxacin and tigecycline. As for the water isolates, resistance was observed only to gentamicin (34.8%) and imipenem (17.4%). According to the PFGE results, all isolates obtained from humans and water belonged to the same pulsotype (1), which was identified by recA sequencing as B. cepacia, belonging to sequence type ST-767. The observation of a single pulsotype over three years shows the persistence of this isolate in the pipeline, contaminating patients undergoing hemodialysis despite the routine disinfection of the water with peracetic acid. This persistence is probably due to the production of biofilm, which protects the bacteria from disinfectants. Making this scenario more critical, several isolates proved to be multidrug-resistant (resistant to at least three groups of antimicrobials), making patient care even more difficult.

Keywords: hemodialysis, burkholderia cepacia, PFGE, MLST, multi drug resistance

Procedia PDF Downloads 99
28 Environmental Life Cycle Assessment of Circular, Bio-Based and Industrialized Building Envelope Systems

Authors: N. Cihan Kayaçetin, Stijn Verdoodt, Alexis Versele

Abstract:

The construction industry accounts for one-third of all waste generated in European Union (EU) countries. The Circular Economy Action Plan of the EU aims to tackle this issue and aspires to enhance the sustainability of the construction industry by adopting more circular principles and bio-based material use. The Interreg Circular Bio-Based Construction Industry (CBCI) project was conceived to research how this adoption can be facilitated. For this purpose, an approach is developed that integrates technical, legal and social aspects and provides business models for circular design and building with bio-based materials. Within the scope of the project, the research outputs are to be displayed in a real-life setting by constructing a demo terraced single-family house, the living lab (LL), located in Ghent (Belgium). The realization of the LL is conducted in a step-wise approach that includes iterative processes for the design, description, criteria definition and multi-criteria assessment of building components. The essence of the research lies in the exploratory approach to state-of-the-art building envelope and technical system options for achieving an optimum combination for circular and bio-based construction. For this purpose, nine preliminary designs (PDs) for the building envelope were generated, consisting of three basic construction methods: masonry, lightweight steel construction and wood framing construction, supplemented with bio-based construction methods like cross-laminated timber (CLT) and massive wood framing. A comparative analysis of the PDs was conducted utilizing several complementary tools to assess circularity. This paper focuses on the life cycle assessment (LCA) approach for evaluating the environmental impact of the LL Ghent. The adoption of an LCA methodology was considered critical for providing a comprehensive set of environmental indicators.
The PDs were developed at the component level, in particular for the (i) inclined roof, (ii-iii) front and side façades, (iv) internal walls and (v-vi) floors. The assessment was conducted on two levels: component and building. The options for each component were compared in a first iteration; then the PDs, as assemblies of components, were further analyzed. The LCA was based on a functional unit of one square meter of each component, and CEN indicators were utilized for impact assessment over a reference study period of 60 years. A total of 54 building components, composed of 31 distinct materials, were evaluated in the study. The results indicate that wood framing construction supplemented with bio-based construction methods performs environmentally better than the masonry or steel-construction options. An analysis of the correlation between the total weight of components and their environmental impact was also conducted. It showed that masonry structures display high environmental impact and weight, steel structures display low weight but relatively high environmental impact, and wood framing construction displays both low weight and low environmental impact. The study provided valuable outputs on two levels: (i) several improvement options at the component level through substitution of materials with critical weight and/or impact per unit, and (ii) feedback on environmental performance for the decision-making process during the design phase of a circular single-family house.
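The component-level aggregation over the 60-year reference study period can be sketched as below. The per-square-meter GWP figures, areas, and service lives are hypothetical placeholders, not values from the study:

```python
import math

RSP = 60  # reference study period, years

def component_gwp(area_m2, gwp_per_m2, service_life):
    """Total GWP of a component over the RSP, counting replacements."""
    installs = math.ceil(RSP / service_life)
    return area_m2 * gwp_per_m2 * installs

# name: (area in m2, kg CO2-eq per m2, service life in years) -- invented
options = {
    "masonry facade": (120, 95.0, 60),
    "steel-frame facade": (120, 80.0, 60),
    "CLT facade": (120, 35.0, 60),
}

# Rank the facade options by total impact over the study period.
ranked = sorted(options, key=lambda k: component_gwp(*options[k]))
for name in ranked:
    print(f"{name}: {component_gwp(*options[name]):.0f} kg CO2-eq")
```

With these invented inputs the bio-based (CLT) option ranks best, which is the qualitative pattern the study reports; shorter service lives would add replacement cycles and change the ranking.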

Keywords: circular and bio-based materials, comparative analysis, life cycle assessment (LCA), living lab

Procedia PDF Downloads 183
27 Impact of Water Interventions under WASH Program in the South-west Coastal Region of Bangladesh

Authors: S. M. Ashikur Elahee, Md. Zahidur Rahman, Md. Shofiqur Rahman

Abstract:

This study evaluated the impact of different water interventions under the WASH program on households' access to safe drinking water. Following a survey method, the study was carried out in two Upazilas of the South-west coastal region of Bangladesh, namely Koyra in Khulna district and Shymnagar in Satkhira district. Being an explanatory study, a total of 200 households selected by random sampling were interviewed using a structured interview schedule. The predicted probability suggests that around 62 percent of households lack year-round access to safe drinking water, while only 25 percent of households have access at the SPHERE standard (913 liters per person per year). Moreover, the majority (78 percent) of households do not meet both indicators simultaneously. The distance from the household residence to the water source varies from 0 to 25 kilometers, with an average distance of 2.03 kilometers. The study also reveals that an increase in monthly income of around BDT 1,000 leads to an additional 11 liters (coefficient 0.01 at p < 0.1) of safe drinking water consumption per person per year. As expected, lining-up time has a significant negative relationship with the dependent variables, i.e., for higher lining-up time, the probability of attaining both the SPHERE standard and year-round access becomes lower. According to the ordinary least squares (OLS) regression results, water consumption decreases by 93 liters per person per year if one member is added to the household. Regarding water consumption intensity, the ordered logistic regression (OLR) model shows that a one-minute increase in lining-up time for water collection tends to reduce water consumption intensity. Correspondingly, the OLS results show that for a one-minute increase in lining-up time, water consumption decreases by around 8 liters.
Considering access to a Deep Tube Well (DTW) as the reference dummy, in the OLR, households relying on a Pond Sand Filter (PSF), Shallow Tube Well (STW), Reverse Osmosis (RO) and Rainwater Harvesting System (RWHS) are respectively 37 percent, 29 percent, 61 percent and 27 percent less likely to have year-round access to water. In terms of health impact, different types of waterborne diseases, like diarrhea, cholera, and typhoid, are common among the coastal community, caused by microbial impurities such as bacteria and protozoa. High turbidity and total dissolved solids (TDS) in pond water, caused by reduced water depth and the presence of suspended particles and inorganic salts, stimulate the growth of bacteria, protozoa, and algae, creating health hazards. Meanwhile, excessive growth of algae in pond water, caused by excessive nitrate, adversely affects child health. To ensure access at the SPHERE standard, the number of water interventions at a reasonable distance, preferably within half a kilometer of the dwelling place, needs to be increased, with community people involved in the installation process; collectively owned water interventions were found more effective than privately owned ones. In addition, a demand-responsive approach to the supply of piped water should be adopted to allow consumer demand to guide future investment in domestic water supply.
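The OLS specification behind the reported coefficients can be sketched with synthetic data. The generated sample merely mimics the signs of the reported effects (household size and lining-up time negative, income positive); it is not the survey data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
hh_size = rng.integers(2, 9, n).astype(float)     # household members
lineup_min = rng.uniform(0, 60, n)                # lining-up time (minutes)
income_k = rng.uniform(5, 40, n)                  # monthly income, BDT '000

# Synthetic consumption (liters/person/year) built with the reported signs.
consumption = (900.0 - 93.0 * hh_size - 8.0 * lineup_min
               + 11.0 * income_k + rng.normal(0, 30, n))

# Ordinary least squares fit of consumption on the three covariates.
X = np.column_stack([np.ones(n), hh_size, lineup_min, income_k])
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
print("intercept, hh_size, lineup, income:", np.round(beta, 1))
```

The recovered coefficients approximate the generating values, illustrating how the paper's "93 liters per additional member" and "8 liters per extra minute" readings come out of the fitted slopes.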

Keywords: access, impact, safe drinking water, Sphere standard, water interventions

Procedia PDF Downloads 219
26 Best Practices and Recommendations for CFD Simulation of Hydraulic Spool Valves

Authors: Jérémy Philippe, Lucien Baldas, Batoul Attar, Jean-Charles Mare

Abstract:

The proposed communication deals with the research and development of a rotary direct-drive servovalve for aerospace applications. A key challenge of the project is to downsize the electromagnetic torque motor by reducing the torque required to drive the rotary spool. It is intended to optimize the spool and sleeve geometries by combining a Computational Fluid Dynamics (CFD) approach with commercial optimization software. The present communication addresses an important phase of the project, which consists first of gaining confidence in the simulation results. It is well known that the force needed to pilot a sliding spool valve comes from several physical effects: hydraulic forces, friction, and the inertia/mass of the moving assembly. Among them, the flow force is usually a major contributor to the steady-state (or Root Mean Square) driving torque. In recent decades, CFD has gradually become a standard simulation tool for studying fluid-structure interactions. However, in the particular case of high-pressure valve design, the authors have found that the calculated overall hydraulic force depends on the parameterization and options used to build and run the CFD model. To address this issue, the authors selected the standard case of the linear spool valve, which is addressed in detail in numerous scientific references (analytical models, experiments, CFD simulations). The first CFD simulations run by the authors showed that the evolution of the equivalent discharge coefficient vs. Reynolds number at the metering orifice corresponds well to the values predicted by classical analytical models. Conversely, the simulated flow force was found to be quite different from the value calculated analytically. This drove the authors to investigate in detail the influence of the studied domain and the settings of the CFD simulation.
It was first shown that the flow recirculates in the inlet and outlet channels if their length is not sufficient relative to their hydraulic diameter. The dead volume on the uncontrolled orifice side also plays a significant role. These examples highlight the influence of the geometry of the fluid domain considered. The second action was to investigate the influence of the type of mesh, the turbulence models and near-wall approaches, and the numerical solver and discretization scheme order. Two approaches were used to determine the overall hydraulic force acting on the moving spool. First, the force was deduced from the momentum balance on a control domain delimited by the valve inlet and outlet and the spool walls. Second, the overall hydraulic force was calculated from the integral of the pressure and shear forces acting at the boundaries of the fluid domain. This underlined the significant contribution of the viscous forces acting on the spool between the inlet and outlet orifices, which is generally not considered in the literature. This also emphasized the influence of the choices made in the implementation of the CFD calculation and results analysis. With the step-by-step process adopted to increase confidence in the CFD simulations, the authors propose a set of best practices and recommendations for the efficient use of CFD to design high-pressure spool valves.
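For the linear spool valve benchmark, the classical analytical flow-force estimate against which CFD results are usually checked can be written out explicitly. All numerical values below (oil density, pressure drop, coefficients, geometry) are illustrative assumptions, not the paper's data:

```python
import math

rho = 850.0                   # oil density, kg/m^3 (assumed)
dp = 100e5                    # pressure drop across the metering orifice, Pa
cd = 0.70                     # discharge coefficient (assumed turbulent value)
cv = 0.98                     # velocity coefficient (assumed)
theta = math.radians(69.0)    # classical jet angle for a sharp-edged orifice
w = 0.01                      # orifice area gradient (slot width), m
x = 0.5e-3                    # spool opening, m

area = w * x
# Orifice equation for the volumetric flow rate.
flow = cd * area * math.sqrt(2.0 * dp / rho)          # m^3/s
# Steady axial flow force (jet momentum flux), tending to close the valve:
flow_force = 2.0 * cd * cv * area * dp * math.cos(theta)
print(f"Q = {flow * 6e4:.1f} L/min, flow force = {flow_force:.1f} N")
```

This momentum-flux expression is the analytical baseline; the abstract's point is that a CFD estimate should additionally account for viscous shear on the spool lands, which this classical formula omits.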

Keywords: computational fluid dynamics, hydraulic forces, servovalve, rotary servovalve

Procedia PDF Downloads 43
25 The Use of Modern Technologies and Computers in the Archaeological Surveys of Sistan in Eastern Iran

Authors: Mahyar MehrAfarin

Abstract:

The Sistan region in eastern Iran is a significant archaeological area in Iran and the Middle East, encompassing 10,000 square kilometers. Previous archaeological field surveys have identified 1662 ancient sites dating from prehistoric times to the Islamic period. Research Aim: This article aims to explore the utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, and the benefits derived from their implementation. Methodology: The research employs a descriptive-analytical approach combined with field methods. New technologies and software, such as GPS, drones, magnetometers, equipped cameras, satellite images, and software programs like GIS, Map source, and Excel, were utilized to collect information and analyze data. Findings: The use of modern technologies and computers in archaeological field surveys proved to be essential. Traditional archaeological activities, such as excavation and field surveys, are time-consuming and costly. Employing modern technologies helps in preserving ancient sites, accurately recording archaeological data, reducing errors and mistakes, and facilitating correct and accurate analysis. Creating a comprehensive and accessible database, generating statistics, and producing graphic designs and diagrams are additional advantages derived from the use of efficient technologies in archaeology. Theoretical Importance: The integration of computers and modern technologies in archaeology contributes to interdisciplinary collaborations and facilitates the involvement of specialists from various fields, such as geography, history, art history, anthropology, laboratory sciences, and computer engineering. The utilization of computers in archaeology spans diverse areas, including database creation, statistical analysis, graphics implementation, laboratory and engineering applications, and even artificial intelligence, which remains an unexplored area in Iranian archaeology.
Data Collection and Analysis Procedures: Information was collected using modern technologies and software, capturing geographic coordinates, aerial images, archaeogeophysical data, and satellite images. These data were then input into various software programs for analysis, including GIS, Map source, and Excel. The research employed both descriptive and analytical methods to present findings effectively. Question Addressed: The primary question addressed in this research is how the use of modern technologies and computers in archaeological field surveys in Sistan, Iran, can enhance archaeological data collection, preservation, analysis, and accessibility. Conclusion: The utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, has proven to be necessary and beneficial. These technologies aid in preserving ancient sites, accurately recording archaeological data, reducing errors, and facilitating comprehensive analysis. The creation of accessible databases, statistics generation, graphic designs, and interdisciplinary collaborations are further advantages observed. It is recommended to explore the potential of artificial intelligence in Iranian archaeology as an unexplored area. The research has implications for cultural heritage organizations, archaeology students, and universities involved in archaeological field surveys in Sistan and Baluchistan province. Additionally, it contributes to enhancing the understanding and preservation of Iran's archaeological heritage.

Keywords: Iran, sistan, archaeological surveys, computer use, modern technologies

Procedia PDF Downloads 79
24 Precancerous Lesions Related to Human Papillomavirus: Importance of Cervicography as a Complementary Diagnostic Method

Authors: Denise De Fátima Fernandes Barbosa, Tyane Mayara Ferreira Oliveira, Diego Jorge Maia Lima, Paula Renata Amorim Lessa, Ana Karina Bezerra Pinheiro, Cintia Gondim Pereira Calou, Glauberto Da Silva Quirino, Hellen Lívia Oliveira Catunda, Tatiana Gomes Guedes, Nicolau Da Costa

Abstract:

The aim of this study is to evaluate the use of Digital Cervicography (DC) in the diagnosis of precancerous lesions related to Human Papillomavirus (HPV). This is a cross-sectional, evaluative study with a quantitative approach, held in a health unit linked to the Pro-Deanship of Extension of the Federal University of Ceará from July to August 2015, with a sample of 33 women. Data collection was conducted through interviews using a structured instrument. The DC technique followed the standard of Franco (2005). Polymerase Chain Reaction (PCR) was performed to identify high-risk HPV genotypes. The DCs were evaluated and classified by three judges. The results of DC and PCR were classified as positive, negative or inconclusive. The data from the collection instruments were compiled and analyzed with the Statistical Package for the Social Sciences (SPSS), using descriptive statistics and cross-references. Sociodemographic, sexual and reproductive variables were analyzed through absolute frequencies (N) and their respective percentages (%). The kappa coefficient (κ) was applied to determine the agreement of the DC reports among the evaluators, between DC and PCR, and among the judges on the DC results. Pearson's chi-square test was used for the analysis of sociodemographic, sexual and reproductive variables against the PCR reports. Values of p<0.05 were considered statistically significant. The ethical aspects of research involving human beings were respected, according to Resolution 466/2012. Regarding the sociodemographic profile, the most prevalent age groups, equally represented, were 21-30 and 41-50 years (24.2% each). Most participants self-reported as brown-skinned (84.8%), and 96.9% had completed, or were attending, primary or secondary school. 51.5% were married, 72.7% Catholic, 54.5% employed and 48.5% had an income between one and two minimum wages.
As for the sexual and reproductive characteristics, most were heterosexual (93.9%) and did not use condoms during sexual intercourse (72.7%). 51.5% had a previous history of Sexually Transmitted Infection (STI), with HPV the most prevalent STI (76.5%). 57.6% did not use contraception, 78.8% underwent the cervical cancer screening exam (PCCU) at intervals of one year or less, 72.7% had no cases of cervical cancer in the family, 63.6% were multiparous and 97% were not vaccinated against HPV. DC showed a good level of agreement between raters (κ=0.542) and had a specificity of 77.8% and a sensitivity of 25% when its results were compared with PCR. Only the variable race showed a statistically significant association with PCR (p=0.042). DC had 100% acceptance among the women in the sample, suggesting further trials of this method to establish it as a viable technique. The DC positivity criteria were developed by nurses, and these professionals also perform the PCCU in Brazil, which means that DC can be an important complementary diagnostic method for enhancing the quality of examinations performed by these professionals.
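The agreement and accuracy statistics reported above (κ, sensitivity, specificity) can all be recomputed from a 2x2 table. The counts below are hypothetical, chosen only so the sketch reproduces a 25% sensitivity and 77.8% specificity; they are not the study's raw data:

```python
def diagnostics(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 table."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                  # true positives among diseased
    spec = tn / (tn + fp)                  # true negatives among healthy
    p_obs = (tp + tn) / n                  # observed agreement
    # Chance agreement from the marginal totals.
    p_exp = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sens, spec, kappa

# Hypothetical counts: DC result vs. PCR as the reference standard.
sens, spec, kappa = diagnostics(tp=4, fp=4, fn=12, tn=14)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}, kappa = {kappa:.3f}")
```

The same `diagnostics` function applied to inter-rater tables yields the κ used to compare judges, which is how a figure like κ=0.542 is obtained.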

Keywords: gynecological examination, human papillomavirus, nursing, papillomavirus infections, uterine neoplasms

Procedia PDF Downloads 300
23 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties stemming from numerous issues such as a lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, the representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each carry a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and across variables. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and determines statistical weights from a least-square minimization via simple multiple regression; these weights are then used in the forecast phase. Superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability.
The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Centers for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada), and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble trained on similar conditions is also discussed in the present study, based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested in combination with the above-mentioned approaches. The comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (June to September) rainfall than the conventional multi-model approach and the member models.
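The training-phase regression described above can be sketched as a least-squares fit of observations on member-model forecasts. This is a minimal illustration only: the function names and the explicit bias term are assumptions, and operational superensembles typically regress anomalies from climatology rather than raw values.

```python
import numpy as np

def train_weights(train_fcsts, train_obs):
    """Least-squares weights for each member model, plus a bias term.
    train_fcsts: (n_times, n_models); train_obs: (n_times,)."""
    X = np.column_stack([np.ones(len(train_obs)), train_fcsts])
    coef, *_ = np.linalg.lstsq(X, train_obs, rcond=None)
    return coef  # coef[0] is the bias, coef[1:] the per-model weights

def superensemble_forecast(fcsts, coef):
    """Apply trained weights to new member forecasts (n_models,) or (n, n_models)."""
    return coef[0] + np.atleast_2d(fcsts) @ coef[1:]
```

In the dynamical model selection variant described in the study, a fit of this kind would be carried out independently at each grid point and forecast step, restricted to the best-performing members.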

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 139
22 Consumer Preferences for Low-Carbon Futures: A Structural Equation Model Based on the Domestic Hydrogen Acceptance Framework

Authors: Joel A. Gordon, Nazmiye Balta-Ozkan, Seyed Ali Nabavi

Abstract:

Hydrogen-fueled technologies are rapidly advancing as a critical component of the low-carbon energy transition. In countries historically reliant on natural gas for home heating, such as the UK, hydrogen may prove fundamental for decarbonizing the residential sector, alongside other technologies such as heat pumps and district heat networks. While the UK government is set to take a long-term policy decision on the role of domestic hydrogen by 2026, there are considerable uncertainties regarding consumer preferences for ‘hydrogen homes’ (i.e., hydrogen-fueled appliances for space heating, hot water, and cooking). In comparison to other hydrogen energy technologies, such as road transport applications, few studies to date have engaged with the social acceptance aspects of the domestic hydrogen transition, resulting in a stark knowledge deficit and a pronounced risk to policymaking efforts. In response, this study aims to safeguard against undesirable policy measures by revealing the underlying relationships between the factors of domestic hydrogen acceptance and their respective dimensions: attitudinal, socio-political, community, market, and behavioral acceptance. The study employs an online survey (n=~2100) to gauge how different UK householders perceive the proposition of switching from natural gas to hydrogen-fueled appliances. In addition to accounting for housing characteristics (i.e., housing tenure, property type, and number of occupants per dwelling) and several other socio-structural variables (e.g., age, gender, and location), the study explores the impacts of consumer heterogeneity on hydrogen acceptance by recruiting respondents from five distinct groups: (1) fuel-poor householders, (2) technology-engaged householders, (3) environmentally engaged householders, (4) technology- and environmentally-engaged householders, and (5) a baseline group (n=~700) which excludes each of the smaller targeted groups (n=~350 each).
This research design reflects the notion that supporting a socially fair and efficient transition to hydrogen will require parallel engagement with potential early adopters and with demographic groups affected by fuel poverty, while also accounting strongly for public attitudes towards net zero. Employing a second-order multigroup confirmatory factor analysis (CFA) in Mplus, the proposed hydrogen acceptance model is fitted to the data through a partial least squares (PLS) approach. In addition to testing differences between and within groups, the findings provide policymakers with critical insights regarding the significance of knowledge and awareness, safety perceptions, perceived community impacts, cost factors, and trust in key actors and stakeholders as potential explanatory factors of hydrogen acceptance. Preliminary results suggest that knowledge and awareness of hydrogen are positively associated with support for domestic hydrogen at the household, community, and national levels. However, with the exception of technology- and/or environmentally-engaged citizens, much of the population remains unfamiliar with hydrogen and somewhat skeptical of its application in homes. Knowledge and awareness appear critical to fostering positive safety perceptions, alongside higher levels of trust and more favorable expectations of community benefits, appliance performance, and potential cost savings. Based on these preliminary findings, policymakers should act urgently to bring hydrogen into the public consciousness in alignment with energy security, fuel poverty, and net-zero agendas.
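As a purely illustrative aside on the partial least squares family of methods mentioned above: the study fits a second-order CFA in Mplus, which the sketch below does not reproduce. Instead, it shows the core of a bare-bones single-response PLS regression (PLS1, NIPALS-style) with hypothetical data, to convey how PLS extracts components that maximize covariance with the outcome.

```python
import numpy as np

def pls1(X, y, n_components=1):
    """Bare-bones PLS1 (NIPALS): successive components maximizing covariance
    with y. Returns regression coefficients for centered X and y."""
    X = X - X.mean(0)
    y = y - y.mean()
    W, P, q = [], [], []
    Xk, yk = X.copy(), y.copy()
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)          # weight vector
        t = Xk @ w                      # component scores
        tt = t @ t
        p = Xk.T @ t / tt               # X loadings
        qk = yk @ t / tt                # y loading
        Xk = Xk - np.outer(t, p)        # deflate X
        yk = yk - qk * t                # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    # coefficients: B = W (P^T W)^{-1} q
    return W @ np.linalg.solve(P.T @ W, q)
```

With as many components as the rank of X, PLS1 reproduces the ordinary least-squares solution; with fewer, it regularizes toward the directions most predictive of y.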

Keywords: hydrogen homes, social acceptance, consumer heterogeneity, heat decarbonization

Procedia PDF Downloads 114
21 Impact of a Six-Minute Walk or Rest Break during Extended Gameplay on Executive Function in First Person Shooter Esport Players

Authors: Joanne DiFrancisco-Donoghue, Seth E. Jenny, Peter C. Douris, Sophia Ahmad, Kyle Yuen, Hillary Gan, Kenney Abraham, Amber Sousa

Abstract:

Background: Guidelines for maintaining the health of esports players, and the cognitive changes that accompany competitive gaming, are understudied. Executive functioning is an important cognitive skill for an esports player. The relationship between executive functions and physical exercise has been well established; however, the effects of prolonged sitting, regardless of physical activity level, have not been. Prolonged uninterrupted sitting reduces cerebral blood flow, and reduced cerebral blood flow is associated with lower cognitive function and fatigue. This decrease in cerebral blood flow has been shown to be offset by frequent, short walking breaks. These breaks can be as little as 2 minutes at the 30-minute mark and 6 minutes following 60 minutes of prolonged sitting; the rationale is the increase in blood flow and its positive effects on metabolic responses. The primary purpose of this study was to evaluate executive function changes following mid-session 6-minute bouts of walking or complete rest, compared to no break, during prolonged gameplay in competitive first-person shooter (FPS) esports players. Methods: This study was conducted virtually due to the Covid-19 pandemic and was approved by the New York Institute of Technology IRB. Twelve competitive FPS participants provided written consent to participate in this randomized pilot study. All participants held a gold ranking or higher. Participants were asked to play for 2 hours on three separate days. Outcome measures to test executive function included the Color Stroop and the Tower of London tests, which were administered online each day prior to gaming and at the completion of gaming. All participants completed the tests beforehand for familiarization. One day of testing consisted of a 6-minute walk break after 60-75 minutes of play, during which the Rating of Perceived Exertion (RPE) was recorded.
The participants continued to play for another 60-75 minutes and completed the tests again. On another day, the participants repeated the same methods, replacing the 6-minute walk with lying down and resting for 6 minutes. On the last day, the participants played continuously for 2 hours with no break and repeated the outcome tests pre- and post-play. A Latin square was used to randomize the treatment order. Results: Using descriptive statistics, the largest change in mean reaction time for incorrect congruent trials from pre- to post-play was seen following the 6-minute walk (662.0 (609.6) ms pre to 602.8 (539.2) ms post), followed by the 6-minute rest condition (681.7 (618.1) ms pre to 666.3 (607.9) ms post), with minimal change in the continuous condition (594.0 (534.1) ms pre to 589.6 (552.9) ms post). The mean solution time was fastest in the resting condition (7774.6 (6302.8) ms), followed by the walk condition (7929.4 (5992.8) ms), with the continuous condition the slowest (9337.3 (7228.7) ms). Conclusion: Short walking breaks improve blood flow and reduce the risk of venous thromboembolism during prolonged sitting. This pilot study demonstrated that a low-intensity 6-minute walk break, following 60 minutes of play, may also improve executive function in FPS gamers.
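Counterbalancing treatment order with a Latin square, as the study describes, ensures every condition appears once in every ordinal position across participants. A minimal sketch (a simple cyclic construction; the labels are illustrative):

```python
def latin_square(treatments):
    """Cyclic Latin square: each row is one participant's treatment order;
    every treatment appears exactly once per row and once per column."""
    n = len(treatments)
    return [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]
```

For three conditions this yields three orders to which participants can be randomly assigned, so position effects are balanced across conditions.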

Keywords: executive function, FPS, physical activity, prolonged sitting

Procedia PDF Downloads 228
20 Analysis of Vibration and Shock Levels during Transport and Handling of Bananas within the Post-Harvest Supply Chain in Australia

Authors: Indika Fernando, Jiangang Fei, Roger Stanley, Hossein Enshaei

Abstract:

Delicate produce, such as fresh fruit, is increasingly susceptible to physiological damage during essential post-harvest operations such as transport and handling. Vibration and shock during distribution are recognized causes of produce damage within post-harvest supply chains. Mechanical damage caused during transit may significantly diminish the quality of fresh produce and may also result in substantial wastage. Bananas are one of the staple fruit crops and the most sold supermarket produce in Australia. Bananas are also the largest horticultural industry in the state of Queensland, where 95% of the total production is cultivated. This results in significantly lengthy interstate supply chains in which the fruit is exposed to prolonged vibration and shocks. This paper focuses on determining the shock and vibration levels experienced by packaged bananas during transit from the farm gate to the retail market. Tri-axial acceleration data were captured by custom-made accelerometer-based data loggers set to a predetermined sampling rate of 400 Hz. The devices recorded data continuously for 96 hours during the interstate journey of nearly 3,000 km from the growing fields in far north Queensland to the central distribution centre in Melbourne, Victoria. After the bananas were ripened at the ripening facility in Melbourne, the data loggers were used to capture the transport and handling conditions from the central distribution centre to three retail outlets on the outskirts of Melbourne. The quality of the bananas was assessed before and after transport at each location along the supply chain. Time-series vibration and shock data were used to determine the frequency and severity of the transient shocks experienced by the packages. A frequency spectrogram was generated to determine the dominant frequencies within each segment of the post-harvest supply chain.
Root Mean Square (RMS) acceleration levels were calculated to characterise the vibration intensity during transport. Data were further analysed by Fast Fourier Transform (FFT), and Power Spectral Density (PSD) profiles were generated to determine the critical frequency ranges, revealing the frequency ranges in which elevated energy levels were transferred to the packages. It was found that vertical vibration was the highest, with acceleration levels mostly oscillating between ±1 g during transport. Several shock responses exceeding this range were recorded, mostly attributed to package handling. These detrimental high-impact shocks may eventually lead to mechanical damage in bananas, such as impact bruising, compression bruising, and neck injuries, which affect their freshness and visual quality. It was revealed that the frequency ranges of 0-5 Hz and 15-20 Hz exert an elevated level of vibration energy on the packaged bananas, which may result in abrasion damage such as scuffing, fruit rub, and blackened rub. Further research is indicated, especially in the identified critical frequency ranges, to minimise the exposure of fruit to the harmful effects of vibration. Improving handling conditions, and further study of package failure mechanisms under transient shock excitation, will be crucial to improving the visual quality of bananas within the post-harvest supply chain in Australia.
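The RMS and PSD computations described above can be sketched with a plain periodogram; this is a minimal illustration (the study's exact processing chain, windowing, and averaging are not specified beyond FFT/PSD), using the 400 Hz sampling rate mentioned in the abstract:

```python
import numpy as np

def rms(signal):
    """Root-mean-square level of a vibration record (e.g., in g)."""
    return np.sqrt(np.mean(np.square(signal)))

def psd(signal, fs):
    """One-sided periodogram estimate of power spectral density (g^2/Hz)."""
    n = len(signal)
    spec = np.fft.rfft(signal - np.mean(signal))
    p = (np.abs(spec) ** 2) / (fs * n)
    p[1:-1] *= 2                      # fold negative frequencies into one side
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs, p
```

Applied to the vertical-axis record, the frequency bins with the largest PSD values identify the ranges (here reported as 0-5 Hz and 15-20 Hz) where vibration energy is concentrated.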

Keywords: bananas, handling, post-harvest, supply chain, shocks, transport, vibration

Procedia PDF Downloads 190
19 The Multiplier Effects of Intelligent Transport System to Nigerian Economy

Authors: Festus Okotie

Abstract:

Nigeria is the giant of Africa, with great and diverse transport potential yet to be fully tapped and explored. It is the most populous nation in Africa, with nearly 200 million people; the sixth-largest oil producer overall and the largest in Africa, with proven oil and gas reserves of 37 billion barrels and 192 trillion cubic feet; over 300 square kilometers of arable land; and significant deposits of largely untapped minerals. A World Bank indicator measuring trading across borders ranked Nigeria 183rd out of 185 countries in 2017. Different governments have made efforts through various interventions, such as the 2007 ports reforms led by Ngozi Okonjo-Iweala, a former Minister of Finance and World Bank managing director, to resolve challenges such as infrastructure shortcomings, policy and regulatory inconsistencies, and overlapping functions and duplicated roles among the different MDAs. The Intelligent Transport System (ITS) is one of the fundamental structures smart nations and cities are using to improve the living conditions of their citizens and to achieve sustainability. Examples of its benefits include tracking high-pedestrian areas, traffic patterns, and railway stations, and planning and scheduling bus times; it also enhances interoperability, creates alerts about transport conditions, and can swiftly share information across different platforms and transport modes. It also offers a comprehensive approach to risk management, putting emergency procedures and response capabilities in place and identifying dangers, including vandalism or violence, fare evasion, and medical emergencies. The Nigerian transport system is urgently in need of modern infrastructure such as ITS. Smart city transport technology helps cities function productively, while improving services for businesses and the lives of citizens.
This technology can improve travel across traditional modes of transport, such as cars and buses, with immediate benefits for city dwellers, and also helps in managing transport hazards such as dangerous weather conditions, heavy traffic, and unsafe speeds, which can result in accidents and loss of lives. Intelligent transportation systems help in traffic control, for example by permitting traffic lights to react to changing traffic patterns instead of working on a fixed schedule. Intelligent transportation systems are very important to Nigeria's transportation sector and would require trained personnel to drive their efficiency, because the purpose of introducing them is to add value while reducing vehicle miles and traffic congestion, a major challenge around Tin Can Island and Apapa Port, a major transportation hub in Nigeria. There is an urgent need for the federal government, state governments, and houses of assembly to organise a national transportation workshop to begin addressing the challenges in the nation's transport sector, and bills that will facilitate the implementation of policies promoting intelligent transportation systems need to be sponsored because of their potential to create thousands of jobs, provide farmers with better access to cities, and deliver better living conditions for Nigerians.

Keywords: intelligent, transport, system, Nigeria

Procedia PDF Downloads 116
18 Raman Spectral Fingerprints of Healthy and Cancerous Human Colorectal Tissues

Authors: Maria Karnachoriti, Ellas Spyratou, Dimitrios Lykidis, Maria Lambropoulou, Yiannis S. Raptis, Ioannis Seimenis, Efstathios P. Efstathopoulos, Athanassios G. Kontos

Abstract:

Colorectal cancer is the third most common cancer diagnosed in Europe, according to the latest incidence data provided by the World Health Organization (WHO), and early diagnosis has proved to be the key to reducing cancer-related mortality. In cases where surgical intervention is required for cancer treatment, the accurate discrimination between healthy and cancerous tissues is critical for the postoperative care of the patient. The current study focuses on the ex vivo handling of surgically excised colorectal specimens and the acquisition of their spectral fingerprints using Raman spectroscopy. Acquired data were analyzed in an effort to discriminate, at the microscopic scale, between healthy and malignant margins. Raman spectroscopy is a spectroscopic technique with high detection sensitivity and a spatial resolution of a few micrometers. The spectral fingerprint produced during laser-tissue interaction is unique and characterizes the biostructure and its inflammatory or cancerous state. Numerous published studies have demonstrated the potential of the technique as a tool for discriminating between healthy and malignant tissues/cells, either ex vivo or in vivo. However, the handling of excised human specimens and the Raman measurement conditions remain challenging, unavoidably affecting measurement reliability and repeatability, as well as the technique’s overall accuracy and sensitivity. Therefore, tissue handling has to be optimized and standardized to ensure the preservation of cell integrity and hydration level. Various strategies have been implemented in the past, including the use of balanced salt solutions, small humidifiers, or pump-reservoir-pipette systems. In the current study, human colorectal specimens of 10 × 5 mm have been collected to date from 5 patients who underwent open surgery for colorectal cancer. A novel, non-toxic zinc-based fixative (Z7) was used for tissue preservation.
Z7 demonstrates excellent protein preservation and protection against tissue autolysis. Micro-Raman spectra were recorded with a Renishaw Invia spectrometer from successive random 2-micrometer spots upon excitation at 785 nm to decrease the fluorescent background and avoid tissue photodegradation. A temperature-controlled approach was adopted to stabilize the tissue at 2 °C, thus minimizing dehydration effects and consequent focus drift during measurement. A broad spectral range, 500-3200 cm-1, was covered with five consecutive full scans lasting 20 minutes in total. The average spectra were used for least-squares fitting analysis of the Raman modes. Subtle Raman differences were observed between normal and cancerous colorectal tissues, mainly in the intensities of the 1556 cm-1 and 1628 cm-1 Raman modes, which correspond to v(C=C) vibrations in porphyrins, as well as in the 2800-3000 cm-1 range due to CH2 stretching of lipids and CH3 stretching of proteins. The Raman spectra evaluation was supported by histological findings from twin specimens. This study demonstrates that Raman spectroscopy may constitute a promising tool for real-time verification of clear margins in colorectal cancer open surgery.
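As a minimal illustration of least-squares analysis of Raman mode intensities (the fixed 10 cm-1 linewidth, the linear baseline, and all numbers are illustrative assumptions, not the study's fitting parameters), a spectrum can be modeled as Lorentzian peaks at known Raman shifts plus a baseline and fitted by linear least squares:

```python
import numpy as np

def lorentzian(x, center, width):
    """Unit-height Lorentzian line shape."""
    return width**2 / ((x - center) ** 2 + width**2)

def fit_mode_intensities(shift, intensity, centers, width=10.0):
    """Linear least-squares fit of a spectrum as a sum of Lorentzian peaks
    at known Raman shifts plus a linear baseline; returns peak amplitudes."""
    cols = [lorentzian(shift, c, width) for c in centers]
    cols += [np.ones_like(shift), shift]        # baseline: offset + slope
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, intensity, rcond=None)
    return coef[: len(centers)]
```

Comparing the fitted amplitudes at, say, 1556 cm-1 and 1628 cm-1 between normal and cancerous spectra is one simple way to quantify the intensity differences reported above.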

Keywords: colorectal cancer, Raman spectroscopy, malignant margins, spectral fingerprints

Procedia PDF Downloads 91
17 Use of Computers and Peripherals in the Archaeological Surveys of Sistan in Eastern Iran

Authors: Mahyar Mehrafarin, Reza Mehrafarin

Abstract:

The Sistan region in eastern Iran is a significant archaeological area in Iran and the Middle East, encompassing 10,000 square kilometers. Previous archaeological field surveys have identified 1662 ancient sites dating from prehistoric periods to the Islamic period. Research Aim: This article aims to explore the utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, and the benefits derived from their implementation. Methodology: The research employs a descriptive-analytical approach combined with field methods. New technologies and software, such as GPS, drones, magnetometers, equipped cameras, satellite images, and software programs like GIS, Map source, and Excel, were utilized to collect information and analyze data. Findings: The use of modern technologies and computers in archaeological field surveys proved to be essential. Traditional archaeological activities, such as excavation and field surveys, are time-consuming and costly. Employing modern technologies helps in preserving ancient sites, accurately recording archaeological data, reducing errors and mistakes, and facilitating correct and accurate analysis. Creating a comprehensive and accessible database, generating statistics, and producing graphic designs and diagrams are additional advantages derived from the use of efficient technologies in archaeology. Theoretical Importance: The integration of computers and modern technologies in archaeology contributes to interdisciplinary collaborations and facilitates the involvement of specialists from various fields, such as geography, history, art history, anthropology, laboratory sciences, and computer engineering. The utilization of computers in archaeology spans diverse areas, including database creation, statistical analysis, graphics implementation, laboratory and engineering applications, and even artificial intelligence, which remains an unexplored area in Iranian archaeology.
Data Collection and Analysis Procedures: Information was collected using modern technologies and software, capturing geographic coordinates, aerial images, archaeogeophysical data, and satellite images. These data were then input into various software programs, including GIS, Map source, and Excel, for analysis. The research employed both descriptive and analytical methods to present findings effectively. Question Addressed: The primary question addressed in this research is how the use of modern technologies and computers in archaeological field surveys in Sistan, Iran, can enhance archaeological data collection, preservation, analysis, and accessibility. Conclusion: The utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, has proven to be necessary and beneficial. These technologies aid in preserving ancient sites, accurately recording archaeological data, reducing errors, and facilitating comprehensive analysis. The creation of accessible databases, statistics generation, graphic designs, and interdisciplinary collaborations are further advantages observed. It is recommended to explore the potential of artificial intelligence in Iranian archaeology as an unexplored area. The research has implications for cultural heritage organizations, archaeology students, and universities involved in archaeological field surveys in Sistan and Baluchistan province. Additionally, it contributes to enhancing the understanding and preservation of Iran's archaeological heritage.

Keywords: archaeological surveys, computer use, Iran, modern technologies, Sistan

Procedia PDF Downloads 78
16 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus

Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert

Abstract:

Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase to the planning and execution phases, and on to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model, and creating a compatible BIM for them is very challenging. It requires special equipment for data capture and effort to convert these data into a BIM model. The main difficulties in such projects are defining the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic, so integrating the terrain that surrounds buildings into the digital model is essential for running simulations such as flood simulation, energy simulation, etc. Replicating the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information on pavement materials, vegetation types, vegetation heights, and damaged surfaces can be integrated. Our aim in this study is to define the methodology for producing a 3D BIM model of the site and the existing buildings, based on the case study of the École Spéciale des Travaux Publics (ESTP Paris) school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and heights between 50 and 68 meters.
In this work, the campus precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points are computed in the RGF93 - Lambert 93 French system (Réseau Géodésique Français) using different methods: (i) land topographic surveying with a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, and (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters, such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the input data are identified, the digital model of each building is created, and the DTM is also modeled. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats, such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC), and ReViT (RVT), will be generated. Checking the interoperability between BIM models is very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
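Since the DTM described above is a set of discrete height points, producing a continuous surface requires interpolating between survey points. A minimal sketch of one common, simple gridding scheme (inverse-distance weighting; the function name and parameters are illustrative, and real DTM pipelines typically use TIN or kriging methods):

```python
import numpy as np

def idw_height(px, py, xs, ys, zs, power=2.0, eps=1e-12):
    """Inverse-distance-weighted height at (px, py) from scattered
    survey points (xs, ys, zs): nearer points dominate the estimate."""
    d2 = (xs - px) ** 2 + (ys - py) ** 2
    if np.any(d2 < eps):              # query coincides with a survey point
        return float(zs[np.argmin(d2)])
    w = 1.0 / d2 ** (power / 2.0)
    return float(np.sum(w * zs) / np.sum(w))
```

Evaluating such a function over a regular grid of (px, py) positions yields a raster DTM onto which attributes like pavement material or vegetation height can then be draped.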

Keywords: building information modeling, digital terrain model, existing buildings, interoperability

Procedia PDF Downloads 112
15 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms

Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli

Abstract:

Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges. Delayed BOD5 results from the lab, which take 7 to 8 days of analysis, hinder a wastewater treatment plant's (WWTP's) ability to react to different situations and meet treatment goals. Reducing the BOD turnaround time from days to hours is our quest. This work presents a solution based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment; furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process in order to catch anomalies sooner. In our system for continuous monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm for analyzing the data applies ML to a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors.
Each bioreactor contains input/output access to the wastewater sample (influent and effluent), hydraulic conduction tubes, pumps and valves for the batch sample and dilution water, an air supply for dissolved oxygen (DO) saturation, a cooler/heater for sample thermal stability, an optical DO sensor based on fluorescence quenching, pH, ORP, temperature, and atmospheric pressure sensors, and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data are transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically subject to its initial conditions: DO (saturated) and the initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in São Paulo, Brazil.
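A highly simplified sketch of this kind of kinetic model follows. It is not the study's four-equation system: the rate constant, the biomass yield fraction, and the omission of biomass feedback on the oxidation rate are all illustrative assumptions, integrated with a plain Euler step.

```python
import numpy as np

def simulate_bod(L0, X0, k=0.4, y=0.6, dt=0.01, days=5.0):
    """Euler integration of a simplified first-order kinetic sketch:
    organic matter L is oxidized at rate k*L; a fraction y is assimilated
    as biomass X, the rest consumes oxygen. Returns the time grid and the
    cumulative oxygen demand exerted (the BOD curve)."""
    n = int(days / dt)
    t = np.linspace(0.0, days, n + 1)
    L = np.empty(n + 1); X = np.empty(n + 1); O2 = np.empty(n + 1)
    L[0], X[0], O2[0] = L0, X0, 0.0
    for i in range(n):
        r = k * L[i]                          # first-order oxidation rate
        L[i + 1] = L[i] - r * dt              # organic matter consumed
        X[i + 1] = X[i] + y * r * dt          # biomass growth
        O2[i + 1] = O2[i] + (1 - y) * r * dt  # oxygen demand exerted
    return t, O2
```

Fitting curves like this to the bioreactor's DO sensor trace, by minimizing the mean square deviations as described above, is what allows the initial organic matter (and hence BOD) to be estimated in hours rather than days.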

Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning

Procedia PDF Downloads 73
14 A Descriptive Study on Water Scarcity as a One Health Challenge among the Osiram Community, Kajiado County, Kenya

Authors: Damiano Omari, Topirian Kerempe, Dibo Sama, Walter Wafula, Sharon Chepkoech, Chrispine Juma, Gilbert Kirui, Simon Mburu, Susan Keino

Abstract:

The One Health concept was officially adopted by international organizations and scholarly bodies in 1984. It combines the human, animal, and environmental components of health to address global health challenges: through collaborative efforts, optimal health for people, animals, and the environment can be achieved. The One Health approach plays a significant role in the prevention and control of zoonotic diseases; it has been noted that 75% of newly emerging human infectious diseases are zoonotic. In Kenya, One Health has been embraced and strongly advocated for by One Health Central and Eastern Africa (OHCEA), inaugurated on 17 October 2010 at a historic meeting facilitated by USAID with participants from seven public health schools and seven faculties of veterinary medicine in Eastern Africa, two American universities (Tufts and the University of Minnesota), and RESPOND project staff. The study was conducted in Loitoktok Sub-County, specifically in the Amboseli ecosystem. The Amboseli ecosystem covers an area of 5,700 square kilometers and stretches between Mt. Kilimanjaro, the Chyulu Hills, Tsavo West National Park, and the Kenya/Tanzania border. The area is arid to semi-arid and more suitable for pastoralism, with high potential for wildlife conservation and tourism enterprises. The ecosystem consists of Amboseli National Park, surrounded by six group ranches in Loitoktok District: Kimana, Olgulului, Selengei, Mbirikani, Kuku, and Rombo. The Manyatta studied was the Osiram Cultural Manyatta in Mbirikani group ranch. Apart from visiting the Manyatta, we also visited the sub-county hospital, the slaughter slab, the forest service, Kimana market, and Amboseli National Park. The aim of the study was to identify the One Health issues facing the community. This was done by conducting a community needs assessment and prioritization. Different methods were used to collect the qualitative and numerical data.
These included, among others, key informant interviews and focus group discussions. We also guided the community members in drawing a resource map, which helped identify the major resources on their land and some of the issues they were facing. Matrix piling, root cause analysis, and force-field analysis tools were used to establish the One Health priority issues facing community members. Skits were also used to present interventions for the major One Health issues to the community. Among the community's prioritized needs were water scarcity and inadequate markets for their beadwork. The group intervened on the various needs of the Manyatta. For water scarcity, we educated the community on water-harvesting methods using gutters, as well as proper storage in tanks and earth dams. The community was also encouraged to recycle and conserve water. To improve market access, we taught the community to upload their products online: a page was opened for them, and uploading photos was demonstrated. They were also encouraged to be innovative to attract more clients.

Keywords: Amboseli ecosystem, community interventions, community needs assessment and prioritization, one health issues

Procedia PDF Downloads 170
13 Modelling Farmer’s Perception and Intention to Join Cashew Marketing Cooperatives: An Expanded Version of the Theory of Planned Behaviour

Authors: Gospel Iyioku, Jana Mazancova, Jiri Hejkrlik

Abstract:

The “Agricultural Promotion Policy (2016–2020)” represents a strategic initiative by the Nigerian government to address domestic food shortages and the challenge of exporting products at the required quality standards. The sector is hindered by an inefficient system for setting and enforcing food quality standards and by a lack of market knowledge; the Federal Ministry of Agriculture and Rural Development (FMARD) therefore aims to enhance support for the production and related activities of key crops like cashew. By collaborating with farmers, processors, investors, and stakeholders in the cashew sector, the policy seeks to define and uphold high-quality standards across the cashew value chain. Given the challenges and opportunities faced by Nigerian cashew farmers, active participation in cashew marketing groups becomes imperative. These groups serve as essential platforms for farmers to collectively navigate market intricacies, access resources, share knowledge, improve output quality, and bolster their overall bargaining power. Through engagement in these cooperative initiatives, farmers not only boost their economic prospects but can also contribute significantly to the sustainable growth of the cashew industry, fostering resilience and community development. This study explores the perceptions and intentions of farmers regarding their involvement in cashew marketing cooperatives, using an expanded version of the Theory of Planned Behaviour. Drawing insights from a diverse sample of 321 cashew farmers in Southwest Nigeria, the research sheds light on the factors influencing decision-making in cooperative participation. The demographic analysis reveals a diverse landscape, with a substantial presence of middle-aged individuals contributing significantly to the agricultural sector, and cashew-related activities emerging as a primary income source for a notable proportion (23.99%).
Employing Structural Equation Modelling (SEM) with Maximum Likelihood Robust (MLR) estimation in R, the research elucidates the associations among latent variables. Despite the model’s complexity, the goodness-of-fit indices attest to the validity of the structural model, explaining approximately 40% of the variance in the intention to join cooperatives. Moral norms emerge as a pivotal construct, highlighting the profound influence of ethical considerations in decision-making processes, while perceived behavioural control presents potential challenges in active participation. Attitudes toward joining cooperatives reveal nuanced perspectives, with strong beliefs in enhanced connections with other farmers but varying perceptions on improved access to essential information. The SEM analysis establishes positive and significant effects of moral norms, perceived behavioural control, subjective norms, and attitudes on farmers’ intention to join cooperatives. The knowledge construct positively affects key factors influencing intention, emphasizing the importance of informed decision-making. A supplementary analysis using partial least squares (PLS) SEM corroborates the robustness of our findings, aligning with covariance-based SEM results. This research unveils the determinants of cooperative participation and provides valuable insights for policymakers and practitioners aiming to empower and support this vital demographic in the cashew industry.
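A full PLS-SEM, as used in the supplementary analysis above, estimates measurement and structural models jointly; the "partial least squares" core can, however, be illustrated with a minimal single-response PLS regression (PLS1) via the NIPALS deflation scheme. This is a generic sketch, not the authors' analysis; the data, variable names, and component counts are illustrative:

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal single-response partial least squares (PLS1) via NIPALS
    deflation; returns coefficients mapping centred X to centred y."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w = w / np.linalg.norm(w)   # weight: direction of max covariance
        t = X @ w                   # latent component scores
        tt = t @ t
        p = X.T @ t / tt            # X loadings
        q = (y @ t) / tt            # y loading
        X = X - np.outer(t, p)      # deflate X and y before next component
        y = y - t * q
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(Q))
```

With as many components as predictors and noiseless data, PLS1 reproduces the ordinary least-squares solution; with fewer components it gives a regularized fit that tolerates collinear indicators, one reason PLS-SEM is often preferred for complex models and modest samples.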

Keywords: marketing cooperatives, theory of planned behaviour, structural equation modelling, cashew farmers

Procedia PDF Downloads 84
12 Pisolite Type Azurite/Malachite Ore in Sandstones at the Base of the Miocene in Northern Sardinia: The Authigenic Hypothesis

Authors: S. Fadda, M. Fiori, C. Matzuzzi

Abstract:

Mineralized formations in the bottom sediments of a Miocene transgression have been discovered in Sardinia. The mineral assemblage consists of copper sulphides and oxidized species, suggesting fluctuations of redox conditions in neutral- to high-pH, restricted, shallow-water coastal basins. Azurite/malachite has been observed as authigenic and occurs as loose spheroidal crystalline particles associated with the transitional-littoral horizon forming the bottom of the marine transgression. Many field observations are consistent with a supergene circulation of metals involving terrestrial groundwater-seawater mixing. Both the clastic materials and the metals come from Tertiary volcanic edifices, while the main precipitating anions, carbonate and sulphide species, are of both continental and marine origin. Formation of the Cu carbonates as a supergene secondary 'oxide' assemblage does not agree with field evidence, petrographic observations, or textural evidence in the host-rock types. Samples were collected along the sedimentary sequence for different analyses: most elements were determined by X-ray fluorescence and plasma-atomic emission spectroscopy. Mineral identification was obtained by X-ray diffractometry and scanning electron microprobe analysis. Thin sections of the samples were examined by optical microscopy, while porosity measurements were made using a mercury intrusion porosimeter. The Cu carbonates deposited at a temperature below 100 °C, which is consistent with the clay minerals in the matrix of the host rock, dominated by illite and montmorillonite. Azurite nodules grew during the early diagenetic stage through reaction of cupriferous solutions with CO₂ imported from the overlying groundwater and circulating through the sandstones during shallow burial. Decomposition of organic matter in the anoxic bottom waters released additional carbon dioxide to the pore fluids, favouring azurite stability.
In this manner, localized reducing environments were also generated in which Cu was fixed as Cu-sulphides and sulphosalts. Microscopic examination of the textural features of the azurite nodules gives evidence of primary malachite/azurite deposition rather than supergene oxidation in place of primary sulphides. Photomicrographs show nuclei of azurite and malachite surrounded by newly formed microcrystalline carbonates which constitute the matrix. The typical pleochroism of the crystals can also be observed where the mineral fills microscopic fissures or cracks. Sedimentological evidence of transgression and regression indicates that the pore water would have been a variable mixture of marine water and groundwaters, with a possible meteoric component, in an alternately exposed and subaqueous environment owing to water-level fluctuation. Salinity data for the pore fluids, assessed at random intervals along the mineralised strata, confirmed values between about 7,000 and 30,000 ppm measured in coeval sediments at the base of the Miocene, falling in the range of more or less diluted seawater. This suggests a variation in mean pore-fluid pH between 5.5 and 8.5, compatible with the oxidized and reduced mineral parageneses described in this work. The results of stable isotope studies reflect the marine transgressive-regressive cyclicity of events and are compatible with carbon derivation from sea water. During the last, oxidative stage of diagenesis, under surface conditions of higher H₂O and O₂ activity, CO₂ partial pressure decreased and malachite became the stable Cu mineral. The potential for these small but high-grade deposits does exist.

Keywords: sedimentary, Cu-carbonates, authigenic, tertiary, Sardinia

Procedia PDF Downloads 131