Search results for: partial least square
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2855


35 Correlation of Clinical and Sonographic Findings with Cytohistology for Diagnosis of Ovarian Tumours

Authors: Meenakshi Barsaul Chauhan, Aastha Chauhan, Shilpa Hurmade, Rajeev Sen, Jyotsna Sen, Monika Dalal

Abstract:

Introduction: Ovarian masses are common forms of neoplasm in women and represent two-thirds of gynaecological malignancies. A pre-operative suggestion of malignancy can guide the gynecologist to refer women with a suspected pelvic mass to a gynecological oncologist for appropriate therapy and optimized treatment, which can improve survival. In the younger age group, preoperative differentiation into benign or malignant pathology can determine whether conservative or radical surgery is performed. Imaging modalities have a definite role in establishing the diagnosis. By using the International Ovarian Tumor Analysis (IOTA) classification with sonography, reliance on costly radiological methods like Magnetic Resonance Imaging (MRI) / computed tomography (CT) scans can be reduced, especially in developing countries like India. Thus, this study was undertaken to evaluate the role of clinical methods and sonography in diagnosing the nature of ovarian tumors. Materials and Methods: This prospective observational study was conducted on 40 patients presenting with ovarian masses in the Department of Obstetrics and Gynaecology at a tertiary care center in northern India. Functional cysts were excluded. Ultrasonography and color Doppler were performed on all the cases. IOTA rules were applied, which take into account locularity, size, presence of solid components, acoustic shadow, Doppler flow, etc. Magnetic Resonance Imaging (MRI) / computed tomography (CT) scans of the abdomen and pelvis were done in cases where sonography was inconclusive. In inoperable cases, fine needle aspiration cytology (FNAC) was done. The histopathology report after surgery and the cytology report after FNAC were correlated statistically with the pre-operative diagnosis made clinically and sonographically using IOTA rules. Statistical Analysis: Descriptive measures were analyzed using mean and standard deviation with the Student t-test, and proportions were analyzed by applying the chi-square test. Inferential measures were analyzed by sensitivity, specificity, negative predictive value, and positive predictive value. Results: A provisional diagnosis of benign tumor was made in 16 (42.5%) and of malignant tumor in 24 (57.5%) patients on the basis of clinical findings. With IOTA simple rules on sonography, 15 (37.5%) were found to be benign, while 23 (57.5%) were found to be malignant, and findings were inconclusive in 2 patients (5%). FNAC/histopathology, taken as the gold standard, reported 14 (35%) benign and 26 (65%) malignant ovarian tumors. The clinical findings alone were found to have a sensitivity of 66.6% and a specificity of 90.9%. USG alone had a sensitivity of 86% and a specificity of 80%. When clinical findings and IOTA simple rules of sonography were combined (excluding inconclusive masses), the sensitivity and specificity were 83.3% and 92.3%, respectively; when including inconclusive masses, sensitivity came out to be 91.6% and specificity was 89.2%. Conclusion: IOTA simple sonography rules are highly sensitive and specific in the prediction of ovarian malignancy and are also easy to use and easily reproducible. Thus, combining clinical examination with USG will help in the better management of patients in terms of time, cost and prognosis. This will also avoid the need for costlier modalities like CT and MRI.
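As a quick illustration of how the diagnostic-accuracy measures reported above are derived, the Python sketch below computes sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix of test result versus gold-standard histopathology/FNAC. The counts used are hypothetical and are not the study's data.

```python
# Illustrative sketch (not the study's actual 2x2 table): how sensitivity,
# specificity, PPV and NPV are derived from a diagnostic confusion matrix.
def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV as percentages."""
    sensitivity = 100 * tp / (tp + fn)   # malignant cases correctly flagged
    specificity = 100 * tn / (tn + fp)   # benign cases correctly cleared
    ppv = 100 * tp / (tp + fp)           # positive predictive value
    npv = 100 * tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for a 40-patient cohort, for illustration only.
sens, spec, ppv, npv = diagnostic_metrics(tp=22, fp=2, fn=4, tn=12)
print(f"Sensitivity {sens:.1f}%  Specificity {spec:.1f}%  PPV {ppv:.1f}%  NPV {npv:.1f}%")
```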

Keywords: benign, international ovarian tumor analysis classification, malignant, ovarian tumours, sonography

Procedia PDF Downloads 78
34 On the Influence of Sleep Habits for Predicting Preterm Births: A Machine Learning Approach

Authors: C. Fernandez-Plaza, I. Abad, E. Diaz, I. Diaz

Abstract:

Births occurring before the 37th week of gestation are considered preterm births. A threat of preterm labor is defined as the beginning of regular uterine contractions, dilation and cervical effacement between 23 and 36 gestation weeks. To the authors' best knowledge, the factors that determine the beginning of birth are not completely defined yet. In particular, the influence of sleep habits on preterm births is weakly studied. The aim of this study is to develop a model to predict the factors affecting premature delivery in pregnancy, based on the above potential risk factors, including those derived from sleep habits and light exposure at night (introduced as 12 variables obtained by a telephone survey using two questionnaires previously used by other authors). Thus, three groups of variables were included in the study (maternal, fetal and sleep habits). The study was approved by the Research Ethics Committee of the Principado de Asturias (Spain). An observational, retrospective and descriptive study was performed with 481 births between January 1, 2015 and May 10, 2016 in the University Central Hospital of Asturias (Spain). A statistical analysis using SPSS was carried out to compare qualitative and quantitative variables between preterm and term deliveries: the chi-square test was applied for qualitative variables and the t-test for quantitative variables. Statistically significant differences (p < 0.05) between preterm vs. term births were found for primiparity, multiparity, kind of conception, place of residence, premature rupture of membranes and interruptions during the night. In addition to the statistical analysis, machine learning methods were tested to look for a prediction model. In particular, tree-based models were applied, as their trade-off between performance and interpretability is especially suitable for this study. C5.0, recursive partitioning, random forest and tree bag models were analysed using the caret R package. Cross-validation with 10 folds and parameter tuning to optimize the methods were applied. In addition, different noise reduction methods were applied to the initial data using the NoiseFiltersR package. The best performance was obtained by the C5.0 method, with Accuracy 0.91, Sensitivity 0.93, Specificity 0.89 and Precision 0.91. Some well-known preterm birth factors were identified: cervix dilation, maternal BMI, premature rupture of membranes, and nuchal translucency analysis in the first trimester. The model also identifies other new factors related to sleep habits, such as light through the window, bedtime on working days, usage of electronic devices before sleeping from Mondays to Fridays, or a change of sleeping habits reflected in the number of hours, in the depth of sleep or in the lighting of the room. "IF dilation <= 2.95 AND usage of electronic devices before sleeping from Mondays to Fridays = YES AND change of sleeping habits = YES, THEN preterm" is one of the predicting rules obtained by C5.0. In this work, a model for predicting preterm births is developed. It is based on machine learning together with noise reduction techniques. The method maximizing the performance is the one selected. This model shows the influence of variables related to sleep habits on preterm prediction.
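For readers who want to reproduce the general workflow, the sketch below shows an analogous tree-based classifier evaluated with 10-fold cross-validation using scikit-learn in Python (the authors used C5.0 and the caret package in R); the file name and column names are hypothetical stand-ins for the study's variables.

```python
# Analogous sketch in Python (the authors used C5.0 & caret in R):
# a decision tree evaluated with 10-fold cross-validation on maternal,
# fetal and sleep-habit variables. File and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("births.csv")                      # hypothetical data file
X = df[["cervix_dilation", "maternal_bmi",
        "devices_before_sleep", "sleep_habit_change"]]
y = df["preterm"]                                   # 1 = preterm, 0 = term

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(max_depth=4, random_state=0),
                         X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```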

Keywords: machine learning, noise reduction, preterm birth, sleep habit

Procedia PDF Downloads 145
33 Interactive Virtual Patient Simulation Enhances Pharmacology Education and Clinical Practice

Authors: Lyndsee Baumann-Birkbeck, Sohil A. Khan, Shailendra Anoopkumar-Dukie, Gary D. Grant

Abstract:

Technology-enhanced education tools are being rapidly integrated into health programs globally. These tools provide an interactive platform for students and can be used to deliver topics in various modes, including games and simulations. Simulations are of particular interest to healthcare education, where they are employed to enhance clinical knowledge and help to bridge the gap between theory and practice. Simulations will often assess competencies for practical tasks, yet limited research examines the effects of simulation on student perceptions of their learning. The aim of this study was to determine the effects of an interactive virtual patient simulation for pharmacology education and clinical practice on student knowledge, skills and confidence. Ethics approval for the study was obtained from the Griffith University Research Ethics Committee (PHM/11/14/HREC). The simulation was intended to replicate the pharmacy environment and patient interaction. The content was designed to enhance knowledge of proton-pump inhibitor pharmacology, its role in therapeutics and safe supply to patients. The tool was deployed into a third-year clinical pharmacology and therapeutics course. A number of core practice areas were examined, including the competency domains of questioning, counselling, referral and product provision. Baseline measures of student self-reported knowledge, skills and confidence were taken prior to the simulation using a specifically designed questionnaire. A more extensive questionnaire was deployed following the virtual patient simulation, which also included measures of student engagement with the activity. A quiz assessing student factual and conceptual knowledge of proton-pump inhibitor pharmacology and related counselling information was also included in both questionnaires. Sixty-one students (response rate >95%) from two cohorts (2014 and 2015) participated in the study. Chi-square analyses were performed and data analysed using Fisher's exact test. Results demonstrate that student knowledge, skills and confidence within the competency domains of questioning, counselling, referral and product provision show improvement following the implementation of the virtual patient simulation. Statistically significant (p<0.05) improvement occurred in ten of the possible twelve self-reported measurement areas. The greatest magnitude of improvement occurred in the area of counselling (student confidence p<0.0001). Student confidence in all domains (questioning, counselling, referral and product provision) showed a marked increase. Student performance in the quiz also improved, demonstrating a 10% improvement overall for pharmacology knowledge and clinical practice following the simulation. Overall, 85% of students reported the simulation to be engaging and 93% of students felt the virtual patient simulation enhanced learning. The data suggest that the interactive virtual patient simulation developed for clinical pharmacology and therapeutics education enhanced students' knowledge, skills and confidence with respect to the competency domains of questioning, counselling, referral and product provision. These self-reported measures appear to translate to learning outcomes, as demonstrated by the improved student performance in the quiz assessment item. Future research on education using virtual simulation should seek to incorporate modern quantitative measures of student learning and engagement, such as eye tracking.
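The group comparison described above relies on Fisher's exact test applied to contingency tables; a minimal SciPy sketch is shown below, using hypothetical counts rather than the study's data.

```python
# Minimal sketch of the statistical comparison described above: Fisher's
# exact test on a 2x2 contingency table. The counts below are hypothetical,
# not the study's data.
from scipy.stats import fisher_exact

#                 improved  not improved
table = [[52, 9],        # post-simulation cohort (hypothetical)
         [38, 23]]       # baseline cohort (hypothetical)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```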

Keywords: clinical simulation, education, pharmacology, simulation, virtual learning

Procedia PDF Downloads 336
32 Detection of Patient Roll-Over Using High-Sensitivity Pressure Sensors

Authors: Keita Nishio, Takashi Kaburagi, Yosuke Kurihara

Abstract:

Recent advances in medical technology have served to enhance average life expectancy. However, the total time for which patients are prescribed complete bed rest has also increased. Because maintaining a constant lying posture can lead to pressure ulcers (bedsores), the development of a system to detect patient roll-over becomes imperative. For this purpose, extant studies have proposed the use of cameras, and favorable results have been reported. Continuous on-camera monitoring, however, tends to violate patient privacy. We have previously proposed an unconstrained bio-signal measurement system that can detect body motion during sleep without violating the patient's privacy. Therefore, in this study, we propose a roll-over detection method based on the data obtained from this bio-signal measurement system. Signals recorded by the sensor were assumed to comprise respiration, pulse, body-motion, and noise components. Compared with the respiration and pulse components, the body motion during roll-over generates large vibrations. Thus, analysis of the body-motion component facilitates detection of the roll-over tendency. The large vibration associated with the roll-over motion has a great effect on the Root Mean Square (RMS) value of the time series of the body-motion component calculated over short 10 s segments. After calculation, the RMS value of each segment was compared to a threshold value set in advance. If the RMS value in any segment exceeded the threshold, the corresponding data were considered to indicate the occurrence of a roll-over. In order to validate the proposed method, we conducted an experiment. A bi-directional microphone was adopted as a high-sensitivity pressure sensor and was placed between the mattress and the bed frame. Recorded signals passed through an analog band-pass filter (BPF) operating over the 0.16-16 Hz bandwidth. The BPF allowed the respiration, pulse, and body-motion components to pass whilst removing the noise component. The output from the BPF was A/D converted at a sampling frequency of 100 Hz, and the measurement time was 480 seconds. The number of subjects and datasets corresponded to 5 and 10, respectively. Subjects lay on a mattress in the supine position. During data measurement, subjects, upon the investigator's instruction, were asked to roll over into four different positions: supine to left lateral, left lateral to prone, prone to right lateral, and right lateral to supine. Recorded data were divided into 48 segments of 10 s each, and the corresponding RMS value for each segment was calculated. The system was evaluated by the accuracy between the investigator's instruction and the detected segment. As a result, an accuracy of 100% was achieved. While reviewing the time series of recorded data, segments indicating roll-over tendencies were observed to demonstrate a large amplitude. However, clear differences between decubitus posture and the roll-over motion could not be confirmed. Extant research possesses a disadvantage in terms of patient privacy. The proposed study, however, demonstrates more precise detection of patient roll-over tendencies without violating their privacy. As a future prospect, decubitus estimation before and after roll-over could be attempted. Since clear differences between decubitus posture and the roll-over motion could not be confirmed in this paper, future studies could be based on the utilization of the respiration and pulse components.
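A minimal sketch of the detection pipeline described above (band-pass filtering over 0.16-16 Hz, segmentation into 10 s windows, and RMS thresholding) is given below. The synthetic signal and the particular threshold rule are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the roll-over detection pipeline described above: band-pass
# filter (0.16-16 Hz), split into 10 s segments, compare segment RMS with
# a preset threshold. Signal, fs and threshold values are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100                                   # sampling frequency [Hz]
t = np.arange(0, 480, 1 / fs)              # 480 s recording
signal = np.random.randn(t.size) * 0.01    # placeholder for sensor output
signal[20000:20200] += 2.0                 # injected "roll-over" burst

b, a = butter(2, [0.16, 16], btype="bandpass", fs=fs)
body_motion = filtfilt(b, a, signal)

seg_len = 10 * fs                          # 10 s segments -> 48 segments
rms = np.sqrt(np.mean(
    body_motion[: body_motion.size // seg_len * seg_len]
    .reshape(-1, seg_len) ** 2, axis=1))

threshold = 5 * np.median(rms)             # illustrative threshold choice
rollover_segments = np.where(rms > threshold)[0]
print("Roll-over detected in segments:", rollover_segments)
```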

Keywords: bedsore, high-sensitivity pressure sensor, roll-over, unconstrained bio-signal measurement

Procedia PDF Downloads 120
31 Measurement System for Human Arm Muscle Magnetic Field and Grip Strength

Authors: Shuai Yuan, Minxia Shi, Xu Zhang, Jianzhi Yang, Kangqi Tian, Yuzheng Ma

Abstract:

The precise measurement of muscle activity is essential for understanding the function of various body movements. This work aims to develop a muscle magnetic field signal detection system based on mathematical analysis. Medical research has underscored that early detection of muscle atrophy, coupled with lifestyle adjustments such as dietary control and increased exercise, can significantly improve outcomes in muscle-related diseases. Currently, surface electromyography (sEMG) is widely employed in research as an early predictor of muscle atrophy. Nonetheless, the primary limitation of using sEMG to forecast muscle strength is its inability to directly measure the signals generated by muscles. Challenges arise from potential skin-electrode contact issues due to perspiration, leading to inaccurate signals or even signal loss. Additionally, resistance and phase are significantly affected by adipose layers. The recent emergence of optically pumped magnetometers introduces a fresh avenue for bio-magnetic field measurement techniques. These magnetometers possess high sensitivity and, unlike superconducting quantum interference devices (SQUIDs), obviate the need for a cryogenic environment. They detect muscle magnetic field signals in the range of tens to thousands of femtoteslas (fT). The use of magnetometers for capturing muscle magnetic field signals remains unaffected by issues of perspiration and adipose layers. Since their introduction, optically pumped atomic magnetometers have found extensive application in exploring the magnetic fields of organs, such as cardiac and brain magnetism. The optimal operation of these magnetometers necessitates an environment with an ultra-weak magnetic field. To achieve such an environment, researchers usually utilize a combination of active magnetic compensation technology and passive magnetic shielding technology. Passive magnetic shielding uses a shielding device built with high-permeability materials to attenuate the external magnetic field to a few nT. Compared with adding more shielding layers, coils that can generate a reverse magnetic field to precisely compensate for the residual field are cheaper and more flexible. To attain even lower magnetic fields, compensation coils designed using the Biot-Savart law are employed to generate a counteractive magnetic field that eliminates residual magnetic fields. By solving the magnetic field expression at discrete points in the target region, the parameters that determine the current density distribution on the plane can be obtained through the conventional target field method. The current density is obtained from the partial derivative of the stream function, which can be represented by a combination of trigonometric functions. Optimization algorithms from mathematics are introduced into the coil design to obtain the optimal current density distribution. A one-dimensional linear regression analysis was performed on the collected data, obtaining a coefficient of determination R² of 0.9349 with a p-value of 0. This statistical result indicates a stable relationship between the peak-to-peak value (PPV) of the muscle magnetic field signal and the magnitude of grip strength. This system is expected to be a widely used tool for healthcare professionals to gain deeper insights into the muscle health of their patients.
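The reported relationship between the peak-to-peak magnetic signal and grip strength comes from a one-dimensional linear regression; the sketch below shows the computation with SciPy on placeholder data (the arrays are illustrative, not the measured values).

```python
# Illustrative sketch of the one-dimensional linear regression reported
# above (peak-to-peak muscle magnetic signal vs. grip strength). The data
# arrays are placeholders, not the measured values.
import numpy as np
from scipy.stats import linregress

grip_strength = np.array([10, 15, 20, 25, 30, 35, 40])          # e.g. kg
ppv_magnetic = np.array([120, 180, 260, 310, 390, 450, 520])    # e.g. pT

fit = linregress(grip_strength, ppv_magnetic)
print(f"slope = {fit.slope:.1f}, R^2 = {fit.rvalue**2:.4f}, p = {fit.pvalue:.3g}")
```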

Keywords: muscle magnetic signal, magnetic shielding, compensation coils, trigonometric functions

Procedia PDF Downloads 55
30 Implementation of Green Deal Policies and Targets in Energy System Optimization Models: The TEMOA-Europe Case

Authors: Daniele Lerede, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

The European Green Deal is the first internationally agreed set of measures to counter climate change and environmental degradation. Besides the main target of reducing emissions by at least 55% by 2030, it sets the target of accompanying European countries through an energy transition that turns the European Union into a modern, resource-efficient, and competitive net-zero emissions economy by 2050, decoupling growth from the use of resources and ensuring a fair adaptation of all social categories to the transformation process. While the general framework for realizing the goals of the Green Deal already dates back to 2019, strategies and policies keep being developed to cope with recent circumstances and achievements. However, general long-term measures like the Circular Economy Action Plan, the proposals to shift from fossil natural gas to renewable and low-carbon gases, in particular biomethane and hydrogen, and to end the sale of gasoline and diesel cars by 2035, will all have significant effects on the evolution of energy supply and demand across the next decades. The interactions between energy supply and demand over long time frames are usually assessed via energy system models to derive useful insights for policymaking and to address technological choices and research and development. TEMOA-Europe is a newly developed energy system optimization model instance based on the minimization of the total cost of the system under analysis, adopting a technologically integrated, detailed, and explicit formulation and considering the evolution of the system in partial equilibrium in competitive markets with perfect foresight. TEMOA-Europe is developed on the TEMOA platform, an open-source modeling framework implemented entirely in Python, therefore ensuring third-party verification even of large and complex models. TEMOA-Europe is based on a single-region representation of the European Union and EFTA countries on a time scale between 2005 and 2100, relying on a set of assumptions for socio-economic developments based on projections by the International Energy Outlook and a large technological dataset including 7 sectors: the upstream and power sectors for the production of all energy commodities, and the end-use sectors, including industry, transport, residential, commercial and agriculture. TEMOA-Europe also includes an updated hydrogen module considering its production, storage, transportation, and utilization. Besides, it can rely on a wide set of innovative technologies, ranging from nuclear fusion and electricity plants equipped with CCS in the power sector to electrolysis-based steel production processes in the industrial sector (with a techno-economic characterization based on public literature), to produce insightful energy scenarios and especially to cope with the very long analyzed time scale. The aim of this work is to examine in detail the scheme of measures and policies devised to realize the Green Deal and to transform them into a set of constraints and new socio-economic development pathways. Based on them, TEMOA-Europe will be used to produce and comparatively analyze scenarios to assess the consequences of Green Deal-related measures on the future evolution of the energy mix over the whole energy system in an economic optimization environment.
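To make the least-cost principle behind such models concrete, the toy sketch below solves a two-technology dispatch problem with SciPy's linear programming. It is only a schematic analogy to the TEMOA formulation, and the numbers are illustrative, with no relation to the TEMOA-Europe database.

```python
# Toy sketch of the least-cost principle behind TEMOA-style models: choose
# generation from two technologies to meet a demand at minimum total cost,
# subject to a capacity limit and an emissions cap. Numbers are illustrative.
from scipy.optimize import linprog

cost = [50, 120]       # EUR/MWh for gas and renewable generation
demand = 100           # MWh to be served
gas_cap = 80           # MWh maximum gas output
co2_rate = [0.4, 0.0]  # tCO2/MWh
co2_limit = 20         # tCO2 emissions cap

res = linprog(
    c=cost,
    A_ub=[[co2_rate[0], co2_rate[1]], [1, 0]],   # emissions cap, gas capacity
    b_ub=[co2_limit, gas_cap],
    A_eq=[[1, 1]], b_eq=[demand],                # meet demand exactly
    bounds=[(0, None), (0, None)],
)
print("Dispatch [gas, renewable]:", res.x, "total cost:", res.fun)
```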

Keywords: European Green Deal, energy system optimization modeling, scenario analysis, TEMOA-Europe

Procedia PDF Downloads 104
29 Burkholderia Cepacia ST 767 Causing a Three-Year Nosocomial Outbreak in a Hemodialysis Unit

Authors: Gousilin Leandra Rocha Da Silva, Stéfani T. A. Dantas, Bruna F. Rossi, Erika R. Bonsaglia, Ivana G. Castilho, Terue Sadatsune, Ary Fernandes Júnior, Vera l. M. Rall

Abstract:

Kidney failure causes decreased diuresis and the accumulation of nitrogenous substances in the body. To increase patient survival, hemodialysis is used as a partial substitute for renal function. However, contamination of the water used in this treatment, causing bacteremia in patients, is a worldwide concern. The Burkholderia cepacia complex (Bcc), a group of bacteria with more than 20 species, is frequently isolated from hemodialysis water samples and comprises opportunistic bacteria that affect immunosuppressed patients, due to its wide variety of virulence factors, in addition to innate resistance to several antimicrobial agents, contributing to its persistence in the hospital environment and to pathogenesis in the host. The objective of the present work was to characterize, molecularly and phenotypically, Bcc isolates collected from the water and dialysate of a Hemodialysis Unit and from the blood of patients at a Public Hospital in Botucatu, São Paulo, Brazil, between 2019 and 2021. We used 33 Bcc isolates previously obtained from blood cultures of patients with bacteremia undergoing hemodialysis treatment (2019-2021) and 24 isolates obtained from water and dialysate samples in a Hemodialysis Unit (same period). The recA gene was sequenced to identify the specific species within the Bcc group. All isolates were tested for the presence of genes that encode virulence factors, such as cblA, esmR, zmpA and zmpB. Considering the epidemiology of the outbreak, the Bcc isolates were molecularly characterized by Multilocus Sequence Typing (MLST) and by pulsed-field gel electrophoresis (PFGE). The verification and quantification of biofilm in a polystyrene microplate were performed by submitting the isolates to different incubation temperatures (20°C, the average water temperature, and 35°C, the optimal temperature for growth of the group). The antibiogram was performed with disc diffusion tests on agar, using discs impregnated with cefepime (30µg), ceftazidime (30µg), ciprofloxacin (5µg), gentamicin (10µg), imipenem (10µg), amikacin (30µg), sulfamethoxazole/trimethoprim (23.75/1.25µg) and ampicillin/sulbactam (10/10µg). The presence of zmpB was identified in all isolates, and zmpA was observed in 96.5% of the isolates, while none of them presented the cblA and esmR genes. The antibiogram of the 33 human isolates indicated that all were resistant to gentamicin, colistin, ampicillin/sulbactam and imipenem. Sixteen (48.5%) isolates were resistant to amikacin, and lower rates of resistance were observed for meropenem, ceftazidime, cefepime, ciprofloxacin and piperacillin/tazobactam (6.1%). All isolates were sensitive to sulfamethoxazole/trimethoprim, levofloxacin and tigecycline. As for the water isolates, resistance was observed only to gentamicin (34.8%) and imipenem (17.4%). According to the PFGE results, all isolates obtained from humans and water belonged to the same pulsotype (1), which was identified by recA sequencing as B. cepacia, belonging to sequence type ST-767. The observation of a single pulsotype over three years shows the persistence of this isolate in the pipeline, contaminating patients undergoing hemodialysis despite the routine disinfection of water with peracetic acid. This persistence is probably due to the production of biofilm, which protects the bacteria from disinfectants. Making this scenario more critical, several isolates proved to be multidrug-resistant (resistant to at least three groups of antimicrobials), making patient care even more difficult.

Keywords: hemodialysis, burkholderia cepacia, PFGE, MLST, multi drug resistance

Procedia PDF Downloads 98
28 Environmental Life Cycle Assessment of Circular, Bio-Based and Industrialized Building Envelope Systems

Authors: N. Cihan Kayaçetin, Stijn Verdoodt, Alexis Versele

Abstract:

The construction industry accounts for one-third of all waste generated in the European Union (EU) countries. The Circular Economy Action Plan of the EU aims to tackle this issue and aspires to enhance the sustainability of the construction industry by adopting more circular principles and bio-based material use. The Interreg Circular Bio-Based Construction Industry (CBCI) project was conceived to research how this adoption can be facilitated. For this purpose, an approach is developed that integrates technical, legal and social aspects and provides business models for circular designing and building with bio-based materials. In the scope of the project, the research outputs are to be displayed in a real-life setting by constructing a demo terraced single-family house, the living lab (LL) located in Ghent (Belgium). The realization of the LL is conducted in a step-wise approach that includes iterative processes for the design, description, criteria definition and multi-criteria assessment of building components. The essence of the research lies in the exploratory approach to state-of-the-art building envelope and technical system options for achieving an optimum combination for circular and bio-based construction. For this purpose, nine preliminary designs (PD) for the building envelope are generated, which consist of three basic construction methods: masonry, lightweight steel construction and wood framing construction, supplemented with bio-based construction methods like cross-laminated timber (CLT) and massive wood framing. A comparative analysis of the PDs was conducted by utilizing several complementary tools to assess circularity. This paper focuses on the life cycle assessment (LCA) approach for evaluating the environmental impact of the LL Ghent. The adoption of an LCA methodology was considered critical for providing a comprehensive set of environmental indicators. The PDs were developed at the component level, in particular for the (i) inclined roof, (ii-iii) front and side façades, (iv) internal walls and (v-vi) floors. The assessment was conducted on two levels: component and building. The options for each component were compared in a first iteration, and then the PDs, as assemblies of components, were further analyzed. The LCA was based on a functional unit of one square meter of each component, and CEN indicators were utilized for impact assessment over a reference study period of 60 years. A total of 54 building components composed of 31 distinct materials were evaluated in the study. The results indicate that wood framing construction supplemented with bio-based construction methods performs environmentally better than the masonry or steel-construction options. An analysis of the correlation between the total weight of components and environmental impact was also conducted. It was seen that masonry structures display a high environmental impact and weight, steel structures display low weight but relatively high environmental impact, and wood framing construction displays low weight and environmental impact. The study provided valuable outputs on two levels: (i) several improvement options at the component level through the substitution of materials with critical weight and/or impact per unit, and (ii) feedback on environmental performance for the decision-making process during the design phase of a circular single-family house.
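At its core, the component-level LCA described above aggregates material quantities against characterization factors per functional unit (1 m² of component over a 60-year reference period). The sketch below illustrates that aggregation with hypothetical materials and impact factors, not the study's inventory data.

```python
# Minimal sketch of the component-level LCA aggregation described above:
# per 1 m2 functional unit, multiply material quantities by characterization
# factors and sum. Materials, quantities and factors are hypothetical.
materials = {            # kg per m2 of component (hypothetical bill of materials)
    "CLT panel":        45.0,
    "wood fibre board": 12.0,
    "gypsum board":     10.5,
}
gwp_factors = {          # kg CO2-eq per kg material (hypothetical factors)
    "CLT panel":        0.12,
    "wood fibre board": 0.25,
    "gypsum board":     0.22,
}

gwp_per_m2 = sum(qty * gwp_factors[m] for m, qty in materials.items())
print(f"GWP of this wall build-up: {gwp_per_m2:.2f} kg CO2-eq per m2")
```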

Keywords: circular and bio-based materials, comparative analysis, life cycle assessment (LCA), living lab

Procedia PDF Downloads 182
27 Impact of Water Interventions under the WASH Program in the South-West Coastal Region of Bangladesh

Authors: S. M. Ashikur Elahee, Md. Zahidur Rahman, Md. Shofiqur Rahman

Abstract:

This study evaluated the impact of different water interventions under the WASH program on households' access to safe drinking water. Following a survey method, the study was carried out in two Upazilas of the south-west coastal region of Bangladesh, namely Koyra in Khulna district and Shymnagar in Satkhira district. Being an explanatory study, a total of 200 households, selected by applying a random sampling technique, were interviewed using a structured interview schedule. The predicted probability suggests that around 62 percent of households are without year-round access to safe drinking water, whereas only 25 percent of households have access at the SPHERE standard (913 liters per person per year). Besides, the majority (78 percent) of households do not have access on both indicators simultaneously. The distance from the household residence to the water source varies from 0 to 25 kilometers, with an average distance of 2.03 kilometers. The study also reveals that an increase in monthly income of around BDT 1,000 leads to an additional 11 liters (coefficient 0.01 at p < 0.1) of safe drinking water consumption per person per year. As expected, lining-up time has a significant negative relationship with the dependent variables, i.e., for higher lining-up time, the probability of access becomes lower for both the SPHERE standard and year-round access variables. According to the ordinary least squares (OLS) regression results, water consumption decreases by 93 liters per person per year if one member is added to the household. Regarding water consumption intensity, the ordered logistic regression (OLR) model shows that a one-minute increase in lining-up time for water collection tends to reduce water consumption intensity. On the other hand, as per the OLS regression results, for a one-minute increase in lining-up time, water consumption decreases by around 8 liters. Considering access to a Deep Tube Well (DTW) as the reference dummy, in the OLR, households under Pond Sand Filter (PSF), Shallow Tube Well (STW), Reverse Osmosis (RO) and Rainwater Harvesting System (RWHS) interventions are respectively 37 percent, 29 percent, 61 percent and 27 percent less likely to ensure year-round access to water. In terms of health impact, different types of waterborne diseases such as diarrhea, cholera, and typhoid, caused by microbial impurities (i.e., bacteria and protozoa), are common among the coastal community. High turbidity and TDS in pond water, caused by reduced water depth and the presence of suspended particles and inorganic salts, stimulate the growth of bacteria, protozoa, and algae, creating health hazards. Meanwhile, excessive growth of algae in pond water, caused by excessive nitrate in drinking water, adversely affects child health. To ensure access at the SPHERE standard, the number of water interventions needs to be increased at a reasonable distance, preferably within half a kilometer of the dwelling place, with community people involved in the installation process; collectively owned water interventions were found to be more effective than privately owned ones. In addition, a demand-responsive approach to the supply of piped water should be adopted to allow consumer demand to guide investment in domestic water supply in the future.
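A hedged sketch of the OLS specification described above is given below using statsmodels; the survey file and variable names are hypothetical stand-ins for the study's dataset, so the estimated coefficients would only be comparable in spirit to those reported (e.g. ~0.01 liters per BDT of monthly income).

```python
# Hedged sketch of the OLS model described above: regressing per-capita
# annual water consumption (liters) on monthly income, lining-up time and
# household size. The CSV file and variable names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("wash_survey.csv")                    # hypothetical file
X = sm.add_constant(df[["monthly_income_bdt",
                        "lining_up_time_min",
                        "household_size"]])
y = df["water_liters_per_person_year"]

ols = sm.OLS(y, X).fit()
print(ols.summary())   # coefficient table with p-values for each regressor
```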

Keywords: access, impact, safe drinking water, Sphere standard, water interventions

Procedia PDF Downloads 218
26 Best Practices and Recommendations for CFD Simulation of Hydraulic Spool Valves

Authors: Jérémy Philippe, Lucien Baldas, Batoul Attar, Jean-Charles Mare

Abstract:

The proposed communication deals with the research and development of a rotary direct-drive servo valve for aerospace applications. A key challenge of the project is to downsize the electromagnetic torque motor by reducing the torque required to drive the rotary spool. It is intended to optimize the spool and the sleeve geometries by combining a Computational Fluid Dynamics (CFD) approach with commercial optimization software. The present communication addresses an important phase of the project, which consists firstly of gaining confidence in the simulation results. It is well known that the force needed to pilot a sliding spool valve comes from several physical effects: hydraulic forces, friction and the inertia/mass of the moving assembly. Among them, the flow force is usually a major contributor to the steady-state (or Root Mean Square) driving torque. In recent decades, CFD has gradually become a standard simulation tool for studying fluid-structure interactions. However, in the particular case of high-pressure valve design, the authors have found that the calculated overall hydraulic force depends on the parameterization and options used to build and run the CFD model. To address this issue, the authors selected the standard case of the linear spool valve, which is addressed in detail in numerous scientific references (analytical models, experiments, CFD simulations). The first CFD simulations run by the authors showed that the evolution of the equivalent discharge coefficient vs. Reynolds number at the metering orifice corresponds well to the values that can be predicted by the classical analytical models. Conversely, the simulated flow force was found to be quite different from the value calculated analytically. This drove the authors to investigate in detail the influence of the studied domain and the settings of the CFD simulation. It was first shown that the flow recirculates in the inlet and outlet channels if their length is not sufficient with regard to their hydraulic diameter. The dead volume on the uncontrolled orifice side also plays a significant role. These examples highlight the influence of the geometry of the fluid domain considered. The second action was to investigate the influence of the type of mesh, the turbulence models and near-wall approaches, and the numerical solver and discretization scheme order. Two approaches were used to determine the overall hydraulic force acting on the moving spool. First, the force was deduced from the momentum balance on a control domain delimited by the valve inlet and outlet and the spool walls. Second, the overall hydraulic force was calculated from the integral of the pressure and shear forces acting at the boundaries of the fluid domain. This underlined the significant contribution of the viscous forces acting on the spool between the inlet and outlet orifices, which are generally not considered in the literature. This also emphasized the influence of the choices made for the implementation of the CFD calculation and the analysis of results. With the step-by-step process adopted to increase confidence in the CFD simulations, the authors propose a set of best practices and recommendations for the efficient use of CFD to design high-pressure spool valves.
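For context, the classical analytical model that CFD flow-force results are typically benchmarked against can be written as F = 2·Cd·Cv·A·Δp·cos(θ), with a jet angle of roughly 69° for a sharp-edged annular orifice. The sketch below evaluates it at an illustrative operating point; the numerical values are assumptions, not the paper's case.

```python
# Sketch of the classical analytical flow-force model mentioned above:
# F = 2*Cd*Cv*A*dp*cos(theta), with a ~69 deg jet angle for a sharp-edged
# annular metering orifice. Numerical values are illustrative only.
import math

def steady_flow_force(cd, cv, area_m2, dp_pa, jet_angle_deg=69.0):
    """Axial steady-state flow force on a spool metering orifice [N]."""
    return 2.0 * cd * cv * area_m2 * dp_pa * math.cos(math.radians(jet_angle_deg))

# Illustrative operating point: 10 mm2 opening, 100 bar pressure drop.
f = steady_flow_force(cd=0.7, cv=0.98, area_m2=10e-6, dp_pa=100e5)
print(f"Steady flow force ~ {f:.1f} N")
```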

Keywords: computational fluid dynamics, hydraulic forces, servovalve, rotary servovalve

Procedia PDF Downloads 42
25 The Use of Modern Technologies and Computers in the Archaeological Surveys of Sistan in Eastern Iran

Authors: Mahyar MehrAfarin

Abstract:

The Sistan region in eastern Iran is a significant archaeological area in Iran and the Middle East, encompassing 10,000 square kilometers. Previous archeological field surveys have identified 1662 ancient sites dating from prehistoric periods to the Islamic period. Research Aim: This article aims to explore the utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, and the benefits derived from their implementation. Methodology: The research employs a descriptive-analytical approach combined with field methods. New technologies and software, such as GPS, drones, magnetometers, equipped cameras, satellite images, and software programs like GIS, Map source, and Excel, were utilized to collect information and analyze data. Findings: The use of modern technologies and computers in archaeological field surveys proved to be essential. Traditional archaeological activities, such as excavation and field surveys, are time-consuming and costly. Employing modern technologies helps in preserving ancient sites, accurately recording archaeological data, reducing errors and mistakes, and facilitating correct and accurate analysis. Creating a comprehensive and accessible database, generating statistics, and producing graphic designs and diagrams are additional advantages derived from the use of efficient technologies in archaeology. Theoretical Importance: The integration of computers and modern technologies in archaeology contributes to interdisciplinary collaborations and facilitates the involvement of specialists from various fields, such as geography, history, art history, anthropology, laboratory sciences, and computer engineering. The utilization of computers in archaeology spanned across diverse areas, including database creation, statistical analysis, graphics implementation, laboratory and engineering applications, and even artificial intelligence, which remains an unexplored area in Iranian archaeology. Data Collection and Analysis Procedures: Information was collected using modern technologies and software, capturing geographic coordinates, aerial images, archeogeophysical data, and satellite images. This data was then inputted into various software programs for analysis, including GIS, Map source, and Excel. The research employed both descriptive and analytical methods to present findings effectively. Question Addressed: The primary question addressed in this research is how the use of modern technologies and computers in archeological field surveys in Sistan, Iran, can enhance archaeological data collection, preservation, analysis, and accessibility. Conclusion: The utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, has proven to be necessary and beneficial. These technologies aid in preserving ancient sites, accurately recording archaeological data, reducing errors, and facilitating comprehensive analysis. The creation of accessible databases, statistics generation, graphic designs, and interdisciplinary collaborations are further advantages observed. It is recommended to explore the potential of artificial intelligence in Iranian archaeology as an unexplored area. The research has implications for cultural heritage organizations, archaeology students, and universities involved in archaeological field surveys in Sistan and Baluchistan province. Additionally, it contributes to enhancing the understanding and preservation of Iran's archaeological heritage.

Keywords: Iran, sistan, archaeological surveys, computer use, modern technologies

Procedia PDF Downloads 79
24 Precancerous Lesions Related to Human Papillomavirus: Importance of Cervicography as a Complementary Diagnostic Method

Authors: Denise De Fátima Fernandes Barbosa, Tyane Mayara Ferreira Oliveira, Diego Jorge Maia Lima, Paula Renata Amorim Lessa, Ana Karina Bezerra Pinheiro, Cintia Gondim Pereira Calou, Glauberto Da Silva Quirino, Hellen Lívia Oliveira Catunda, Tatiana Gomes Guedes, Nicolau Da Costa

Abstract:

The aim of this study is to evaluate the use of Digital Cervicography (DC) in the diagnosis of precancerous lesions related to Human Papillomavirus (HPV). This is a cross-sectional, evaluative study with a quantitative approach, held in a health unit linked to the Pro-Rectory of Extension of the Federal University of Ceará, in the period of July to August 2015, with a sample of 33 women. Data collection was conducted through interviews using a structured instrument. Franco (2005) standardized the technique used for DC. Polymerase Chain Reaction (PCR) was performed to identify high-risk HPV genotypes. The DC images were evaluated and classified by three judges. The results of DC and PCR were classified as positive, negative or inconclusive. The data from the collection instruments were compiled and analyzed with the Statistical Package for the Social Sciences (SPSS), using descriptive statistics and cross-tabulations. Sociodemographic, sexual and reproductive variables were analyzed through absolute frequencies (N) and their respective percentages (%). The kappa coefficient (κ) was applied to determine the existence of agreement between the evaluators' DC reports and PCR, and also among the judges regarding the DC results. Pearson's chi-square test was used for the analysis of sociodemographic, sexual and reproductive variables against the PCR reports. Values of p<0.05 were considered statistically significant. Ethical aspects of research involving human beings were respected, according to Resolution 466/2012. Regarding the sociodemographic profile, the most prevalent age groups, in equal proportion (24.2% each), were 21-30 and 41-50 years. Brown skin color was the most reported (84.8%), and 96.9% of the women had completed, or were attending, primary or secondary school. 51.5% were married, 72.7% Catholic, 54.5% employed and 48.5% had an income between one and two minimum wages. As for the sexual and reproductive characteristics, heterosexual women predominated (93.9%), as did those who did not use condoms during sexual intercourse (72.7%). 51.5% had a previous history of a Sexually Transmitted Infection (STI), with HPV being the most prevalent STI (76.5%). 57.6% did not use contraception, 78.8% had undergone cervical cancer screening (PCCU) within the last year or less, 72.7% had no cases of cervical cancer in the family, 63.6% were multiparous and 97% were not vaccinated against HPV. DC showed a good level of agreement between raters (κ=0.542) and had a specificity of 77.8% and a sensitivity of 25% when its results were compared with PCR. Only the variable race showed a statistically significant association with PCR (p=0.042). DC had 100% acceptance among the women in the sample, revealing the possibility of further studies using this method so that it can be established as a viable technique. The DC positivity criteria were developed by nurses, and these professionals also perform PCCU in Brazil, which means that DC can be an important complementary diagnostic method for improving the quality of the examinations performed by these professionals.
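Inter-rater agreement of the kind reported above (κ=0.542) is typically computed as Cohen's kappa; the sketch below shows the calculation with scikit-learn on hypothetical judge classifications, not the study's ratings.

```python
# Minimal sketch of the agreement statistic used above: Cohen's kappa
# between two raters' DC classifications. Labels are hypothetical.
from sklearn.metrics import cohen_kappa_score

judge_1 = ["positive", "negative", "negative", "positive", "inconclusive",
           "negative", "positive", "negative", "negative", "positive"]
judge_2 = ["positive", "negative", "positive", "positive", "inconclusive",
           "negative", "negative", "negative", "negative", "positive"]

kappa = cohen_kappa_score(judge_1, judge_2)
print(f"Cohen's kappa = {kappa:.3f}")
```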

Keywords: gynecological examination, human papillomavirus, nursing, papillomavirus infections, uterine neoplasms

Procedia PDF Downloads 299
23 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes, each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and for different variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performances of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean and the best model among the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include the dynamical model selection approach, in which the selection of the superior models from the participating member models is carried out at each grid point and for each forecast step in the training period. A multi-model superensemble based on training using similar conditions is also discussed in the present study, based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been experimented with in combination with the above-mentioned approaches. The comparison of these schemes against the observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall compared to the conventional multi-model approach and the member models.
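The training step described above reduces, at each grid point, to a least-squares fit of member-model anomalies against observed anomalies, with the bias-corrected weighted sum issued as the superensemble forecast. The sketch below illustrates this on synthetic data; the array shapes, training length and noise levels are assumptions, not TIGGE data.

```python
# Sketch of the superensemble training step described above: weights for the
# member models are obtained by a least-squares (multiple regression) fit of
# past forecast anomalies against observed anomalies, and the bias-corrected
# weighted sum is issued as the superensemble forecast. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_models = 120, 5                       # training period, GCM members
obs = rng.gamma(2.0, 5.0, n_days)               # observed rainfall [mm/day]
forecasts = obs[:, None] + rng.normal(0, 3.0, (n_days, n_models))  # member forecasts

anom_f = forecasts - forecasts.mean(axis=0)     # remove each model's mean bias
anom_o = obs - obs.mean()
weights, *_ = np.linalg.lstsq(anom_f, anom_o, rcond=None)

new_fcst = forecasts.mean(axis=0) + rng.normal(0, 3.0, n_models)   # one new day
superensemble = obs.mean() + (new_fcst - forecasts.mean(axis=0)) @ weights
print("weights:", np.round(weights, 3), " superensemble forecast:", round(superensemble, 2))
```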

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 137
22 Consumer Preferences for Low-Carbon Futures: A Structural Equation Model Based on the Domestic Hydrogen Acceptance Framework

Authors: Joel A. Gordon, Nazmiye Balta-Ozkan, Seyed Ali Nabavi

Abstract:

Hydrogen-fueled technologies are rapidly advancing as a critical component of the low-carbon energy transition. In countries historically reliant on natural gas for home heating, such as the UK, hydrogen may prove fundamental for decarbonizing the residential sector, alongside other technologies such as heat pumps and district heat networks. While the UK government is set to take a long-term policy decision on the role of domestic hydrogen by 2026, there are considerable uncertainties regarding consumer preferences for 'hydrogen homes' (i.e., hydrogen-fueled appliances for space heating, hot water, and cooking). In comparison to other hydrogen energy technologies, such as road transport applications, to date few studies have engaged with the social acceptance aspects of the domestic hydrogen transition, resulting in a stark knowledge deficit and a pronounced risk to policymaking efforts. In response, this study aims to safeguard against undesirable policy measures by revealing the underlying relationships between the factors of domestic hydrogen acceptance and their respective dimensions: attitudinal, socio-political, community, market, and behavioral acceptance. The study employs an online survey (n=~2100) to gauge how different UK householders perceive the proposition of switching from natural gas to hydrogen-fueled appliances. In addition to accounting for housing characteristics (i.e., housing tenure, property type and number of occupants per dwelling) and several other socio-structural variables (e.g., age, gender, and location), the study explores the impacts of consumer heterogeneity on hydrogen acceptance by recruiting respondents from across five distinct groups: (1) fuel-poor householders, (2) technology-engaged householders, (3) environmentally engaged householders, (4) technology- and environmentally engaged householders, and (5) a baseline group (n=~700) which filters out each of the smaller targeted groups (n=~350). This research design reflects the notion that supporting a socially fair and efficient transition to hydrogen will require parallel engagement with potential early adopters and demographic groups impacted by fuel poverty, while also accounting strongly for public attitudes towards net zero. Employing a second-order multigroup confirmatory factor analysis (CFA) in Mplus, the proposed hydrogen acceptance model is tested for fit to the data through a partial least squares (PLS) approach. In addition to testing differences between and within groups, the findings provide policymakers with critical insights regarding the significance of knowledge and awareness, safety perceptions, perceived community impacts, cost factors, and trust in key actors and stakeholders as potential explanatory factors of hydrogen acceptance. Preliminary results suggest that knowledge and awareness of hydrogen are positively associated with support for domestic hydrogen at the household, community, and national levels. However, with the exception of technology- and/or environmentally engaged citizens, much of the population remains unfamiliar with hydrogen and somewhat skeptical of its application in homes. Knowledge and awareness appear critical to facilitating positive safety perceptions, alongside higher levels of trust and more favorable expectations for community benefits, appliance performance, and potential cost savings. Based on these preliminary findings, policymakers should prioritize raising public awareness and understanding of hydrogen in alignment with energy security, fuel poverty, and net-zero agendas.

Keywords: hydrogen homes, social acceptance, consumer heterogeneity, heat decarbonization

Procedia PDF Downloads 114
21 Impact of Six-Minute Walk or Rest Breaks during Extended Gameplay on Executive Function in First-Person Shooter Esport Players

Authors: Joanne DiFrancisco-Donoghue, Seth E. Jenny, Peter C. Douris, Sophia Ahmad, Kyle Yuen, Hillary Gan, Kenney Abraham, Amber Sousa

Abstract:

Background: Guidelines for the maintenance of health of esports players and the cognitive changes that accompany competitive gaming are understudied. Executive functioning is an important cognitive skill for an esports player. The relationship between executive functions and physical exercise has been well established. However, the effects of prolonged sitting, regardless of physical activity level, have not been established. Prolonged uninterrupted sitting reduces cerebral blood flow. Reduced cerebral blood flow is associated with lower cognitive function and fatigue. This decrease in cerebral blood flow has been shown to be offset by frequent and short walking breaks. These short breaks can be as little as 2 minutes at the 30-minute mark and 6 minutes following 60 minutes of prolonged sitting. The rationale is the increase in blood flow and the positive effects this has on metabolic responses. The primary purpose of this study was to evaluate executive function changes following 6-minute bouts of walking or complete rest mid-session, compared to no break, during prolonged gameplay in competitive first-person shooter (FPS) esports players. Methods: This study was conducted virtually due to the Covid-19 pandemic and was approved by the New York Institute of Technology IRB. Twelve competitive FPS participants signed written consent to participate in this randomized pilot study. All participants held a gold ranking or higher. Participants were asked to play for 2 hours on three separate days. Outcome measures to test executive function included the Color Stroop and the Tower of London tests, which were administered online each day prior to gaming and at the completion of gaming. All participants completed the tests beforehand for familiarization. One day of testing consisted of a 6-minute walk break after 60-75 minutes of play, during which the Rating of Perceived Exertion (RPE) was recorded; the participants then continued to play for another 60-75 minutes and completed the tests again. On another day, the participants repeated the same methods, replacing the 6-minute walk with lying down and resting for 6 minutes. On the last day, the participants played continuously with no break for 2 hours and repeated the outcome tests pre and post play. A Latin square was used to randomize the treatment order. Results: Using descriptive statistics, the largest change in mean reaction time for incorrect congruent trials from pre to post play was seen following the 6-minute walk (662.0 (609.6) ms pre to 602.8 (539.2) ms post), followed by the 6-minute rest condition (681.7 (618.1) ms pre to 666.3 (607.9) ms post), with minimal change in the continuous condition (594.0 (534.1) ms pre to 589.6 (552.9) ms post). The mean solution time was fastest in the resting condition (7774.6 (6302.8) ms), followed by the walk condition (7929.4 (5992.8) ms), with the continuous condition being the slowest (9337.3 (7228.7) ms). Conclusion: Short walking breaks improve blood flow and reduce the risk of venous thromboembolism during prolonged sitting. This pilot study demonstrated that a low-intensity 6-minute walk break, following 60 minutes of play, may also improve executive function in FPS gamers.

Keywords: executive function, FPS, physical activity, prolonged sitting

Procedia PDF Downloads 226
20 Analysis of Vibration and Shock Levels during Transport and Handling of Bananas within the Post-Harvest Supply Chain in Australia

Authors: Indika Fernando, Jiangang Fei, Roger Stanley, Hossein Enshaei

Abstract:

Delicate produce such as fresh fruit is increasingly susceptible to physiological damage during essential post-harvest operations such as transport and handling. Vibration and shock during distribution are identified factors for produce damage within post-harvest supply chains. Mechanical damage caused during transit may significantly diminish the quality of fresh produce, which may also result in substantial wastage. Bananas are one of the staple fruit crops and the most sold supermarket produce in Australia. Bananas are also the largest horticultural industry in the state of Queensland, where 95% of the total Australian production of bananas is cultivated. This results in significantly lengthy interstate supply chains where fruits are exposed to prolonged vibration and shocks. This paper is focused on determining the shock and vibration levels experienced by packaged bananas during transit from the farm gate to the retail market. Tri-axial acceleration data were captured by custom-made accelerometer-based data loggers, which were set to a predetermined sampling rate of 400 Hz. The devices recorded data continuously for 96 hours during the interstate journey of nearly 3,000 km from the growing fields in far north Queensland to the central distribution centre in Melbourne, Victoria. After the bananas were ripened at the ripening facility in Melbourne, the data loggers were used to capture the transport and handling conditions from the central distribution centre to three retail outlets on the outskirts of Melbourne. The quality of the bananas was assessed before and after transport at each location along the supply chain. Time-series vibration and shock data were used to determine the frequency and the severity of the transient shocks experienced by the packages. A frequency spectrogram was generated to determine the dominant frequencies within each segment of the post-harvest supply chain. Root Mean Square (RMS) acceleration levels were calculated to characterise the vibration intensity during transport. Data were further analysed by Fast Fourier Transform (FFT), and Power Spectral Density (PSD) profiles were generated to determine the critical frequency ranges. This revealed the frequency ranges in which escalated energy levels were transferred to the packages. It was found that the vertical vibration was the highest and that the acceleration levels mostly oscillated between ±1 g during transport. Several shock responses exceeding this range were recorded, which were mostly attributed to package handling. These detrimental high-impact shocks may eventually lead to mechanical damage in bananas, such as impact bruising, compression bruising and neck injuries, which affect their freshness and visual quality. It was revealed that the frequency ranges of 0-5 Hz and 15-20 Hz exert an escalated level of vibration energy on the packaged bananas, which may result in abrasion damage such as scuffing, fruit rub and blackened rub. Further research is indicated, especially in the identified critical frequency ranges, to minimise the exposure of fruits to the harmful effects of vibration. Improving the handling conditions and further studying package failure mechanisms when exposed to transient shock excitation will be crucial to improving the visual quality of bananas within the post-harvest supply chain in Australia.
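The vibration metrics reported above (RMS intensity and PSD-based critical frequencies) can be computed as in the sketch below, which uses a synthetic acceleration trace sampled at 400 Hz in place of the actual logger data.

```python
# Sketch of the vibration analysis described above: RMS acceleration and a
# Welch power spectral density estimate from tri-axial logger data sampled
# at 400 Hz. The signal here is synthetic and stands in for the logger data.
import numpy as np
from scipy.signal import welch

fs = 400                                          # sampling rate [Hz]
t = np.arange(0, 60, 1 / fs)                      # one minute of data
vertical_g = (0.3 * np.sin(2 * np.pi * 3 * t)     # low-frequency road input
              + 0.1 * np.sin(2 * np.pi * 17 * t)  # higher-frequency component
              + 0.05 * np.random.randn(t.size))   # broadband noise [g]

rms_g = np.sqrt(np.mean(vertical_g ** 2))         # overall vibration intensity
freqs, psd = welch(vertical_g, fs=fs, nperseg=4096)
dominant = freqs[np.argmax(psd)]
print(f"RMS = {rms_g:.3f} g, dominant frequency = {dominant:.1f} Hz")
```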

Keywords: bananas, handling, post-harvest, supply chain, shocks, transport, vibration

Procedia PDF Downloads 190
19 The Multiplier Effects of Intelligent Transport System to Nigerian Economy

Authors: Festus Okotie

Abstract:

Nigeria is the giant of Africa, with great and diverse transport potentials yet to be fully tapped and explored. It is the most populated nation in Africa, with nearly 200 million people; the sixth largest oil producer overall and the largest oil producer in Africa, with proven oil and gas reserves of 37 billion barrels and 192 trillion cubic feet; over 300 square kilometers of arable land; and significant deposits of largely untapped minerals. A World Bank indicator which measures trading across borders ranked Nigeria 183 out of 185 countries in 2017. Different governments in the past have made efforts through different interventions, such as the 2007 ports reforms led by Ngozi Okonjo-Iweala, a former Minister of Finance and World Bank managing director, to resolve some of the challenges, such as infrastructure shortcomings, policy and regulatory inconsistencies, and overlapping functions and duplicated roles among the different MDAs. An intelligent transport system is one of the fundamental structures smart nations and cities are using to improve the living conditions of their citizens and achieve sustainability. Examples of its benefits include tracking high-pedestrian areas, traffic patterns and railway stations, and planning and scheduling bus times; it also enhances interoperability, creates alerts on transport situations, and can swiftly share information among the different platforms and transport modes. It also offers a comprehensive approach to risk management, putting emergency procedures and response capabilities in place and identifying dangers, including vandalism or violence, fare evasion, and medical emergencies. The Nigerian transport system is urgently in need of modern infrastructure such as ITS. Smart city transport technology helps cities to function productively while improving services for businesses and the lives of its citizens. This technology has the ability to improve travel across traditional modes of transport, such as cars and buses, with immediate benefits for city dwellers, and also helps in managing conditions such as dangerous weather, heavy traffic, and unsafe speeds, which can result in accidents and loss of lives. Intelligent transportation systems help in traffic control, for example by permitting traffic lights to react to changing traffic patterns instead of working on a fixed schedule. Intelligent transportation systems are very important in Nigeria’s transportation sector and would require trained personnel to drive their efficiency to greater heights, because the purpose of introducing them is to add value and at the same time reduce motor vehicle miles and traffic congestion, which is a major challenge around Tin Can Island and Apapa Port, a major transportation hub in Nigeria. There is an expedient need for the federal government, state governments and houses of assembly to organise a national transportation workshop to begin the process of addressing the challenges in the nation’s transport sector, and bills that will facilitate the implementation of policies to promote intelligent transportation systems need to be sponsored because of their potential to create thousands of jobs for citizens, provide farmers with better access to cities, and offer better living conditions for Nigerians.

Keywords: intelligent, transport, system, Nigeria

Procedia PDF Downloads 115
18 Raman Spectral Fingerprints of Healthy and Cancerous Human Colorectal Tissues

Authors: Maria Karnachoriti, Ellas Spyratou, Dimitrios Lykidis, Maria Lambropoulou, Yiannis S. Raptis, Ioannis Seimenis, Efstathios P. Efstathopoulos, Athanassios G. Kontos

Abstract:

Colorectal cancer is the third most common cancer diagnosed in Europe, according to the latest incidence data provided by the World Health Organization (WHO), and early diagnosis has proved to be the key to reducing cancer-related mortality. In cases where surgical interventions are required for cancer treatment, the accurate discrimination between healthy and cancerous tissues is critical for the postoperative care of the patient. The current study focuses on the ex vivo handling of surgically excised colorectal specimens and the acquisition of their spectral fingerprints using Raman spectroscopy. Acquired data were analyzed in an effort to discriminate, on a microscopic scale, between healthy and malignant margins. Raman spectroscopy is a spectroscopic technique with high detection sensitivity and a spatial resolution of a few micrometers. The spectral fingerprint produced during laser-tissue interaction is unique and characterizes the biostructure and its inflammatory or cancerous state. Numerous published studies have demonstrated the potential of the technique as a tool for discriminating between healthy and malignant tissues/cells, either ex vivo or in vivo. However, the handling of excised human specimens and the Raman measurement conditions remain challenging, unavoidably affecting measurement reliability and repeatability, as well as the technique’s overall accuracy and sensitivity. Therefore, tissue handling has to be optimized and standardized to ensure preservation of cell integrity and hydration level. Various strategies have been implemented in the past, including the use of balanced salt solutions, small humidifiers or pump-reservoir-pipette systems. In the current study, human colorectal specimens of 10 × 5 mm were collected from 5 patients (to date) who underwent open surgery for colorectal cancer. A novel, non-toxic zinc-based fixative (Z7) was used for tissue preservation. Z7 demonstrates excellent protein preservation and protection against tissue autolysis. Micro-Raman spectra were recorded with a Renishaw Invia spectrometer from successive random 2-micrometer spots upon excitation at 785 nm to decrease the fluorescence background and avoid tissue photodegradation. A temperature-controlled approach was adopted to stabilize the tissue at 2 °C, thus minimizing dehydration effects and consequent focus drift during measurement. A broad spectral range, 500-3200 cm-1, was covered with five consecutive full scans that lasted 20 minutes in total. The averaged spectra were used for least-squares fitting analysis of the Raman modes. Subtle Raman differences were observed between normal and cancerous colorectal tissues, mainly in the intensities of the 1556 cm-1 and 1628 cm-1 Raman modes, which correspond to v(C=C) vibrations in porphyrins, as well as in the range of 2800-3000 cm-1 due to CH2 stretching of lipids and CH3 stretching of proteins. Raman spectra evaluation was supported by histological findings from twin specimens. This study demonstrates that Raman spectroscopy may constitute a promising tool for real-time verification of clear margins in colorectal cancer open surgery.
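
As a rough illustration of the least-squares fitting of Raman modes mentioned above, the following is a minimal sketch assuming Lorentzian line shapes and synthetic data standing in for the measured spectra; the peak positions (1556 and 1628 cm-1) come from the abstract, while the function names, widths and noise level are purely illustrative:

# Illustrative sketch (not the authors' code): least-squares fitting of two
# Lorentzian Raman modes near 1556 and 1628 cm-1 on a linear background.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, center, width):
    return amp * width**2 / ((x - center)**2 + width**2)

def model(x, a1, c1, w1, a2, c2, w2, b0, b1):
    return lorentzian(x, a1, c1, w1) + lorentzian(x, a2, c2, w2) + b0 + b1 * x

# Hypothetical averaged spectrum: Raman shift (cm-1) and intensity arrays.
shift = np.linspace(1500, 1700, 400)
rng = np.random.default_rng(0)
intensity = model(shift, 80, 1556, 8, 60, 1628, 9, 10, 0.01) + rng.normal(0, 2, shift.size)

p0 = [50, 1556, 10, 50, 1628, 10, 0, 0]   # initial guesses for the fit
popt, pcov = curve_fit(model, shift, intensity, p0=p0)
perr = np.sqrt(np.diag(pcov))             # 1-sigma parameter uncertainties

for label, idx in [("1556 cm-1 mode", 0), ("1628 cm-1 mode", 3)]:
    print(f"{label}: amplitude = {popt[idx]:.1f} +/- {perr[idx]:.1f}")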

Keywords: colorectal cancer, Raman spectroscopy, malignant margins, spectral fingerprints

Procedia PDF Downloads 90
17 Use of Computers and Peripherals in the Archaeological Surveys of Sistan in Eastern Iran

Authors: Mahyar Mehrafarin, Reza Mehrafarin

Abstract:

The Sistan region in eastern Iran is a significant archaeological area in Iran and the Middle East, encompassing 10,000 square kilometers. Previous archeological field surveys have identified 1662 ancient sites dating from prehistoric periods to the Islamic period. Research Aim: This article aims to explore the utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, and the benefits derived from their implementation. Methodology: The research employs a descriptive-analytical approach combined with field methods. New technologies and software, such as GPS, drones, magnetometers, equipped cameras, satellite images, and software programs like GIS, Map source, and Excel, were utilized to collect information and analyze data. Findings: The use of modern technologies and computers in archaeological field surveys proved to be essential. Traditional archaeological activities, such as excavation and field surveys, are time-consuming and costly. Employing modern technologies helps in preserving ancient sites, accurately recording archaeological data, reducing errors and mistakes, and facilitating correct and accurate analysis. Creating a comprehensive and accessible database, generating statistics, and producing graphic designs and diagrams are additional advantages derived from the use of efficient technologies in archaeology. Theoretical Importance: The integration of computers and modern technologies in archaeology contributes to interdisciplinary collaborations and facilitates the involvement of specialists from various fields, such as geography, history, art history, anthropology, laboratory sciences, and computer engineering. The utilization of computers in archaeology spanned across diverse areas, including database creation, statistical analysis, graphics implementation, laboratory and engineering applications, and even artificial intelligence, which remains an unexplored area in Iranian archaeology. Data Collection and Analysis Procedures: Information was collected using modern technologies and software, capturing geographic coordinates, aerial images, archeogeophysical data, and satellite images. This data was then inputted into various software programs for analysis, including GIS, Map source, and Excel. The research employed both descriptive and analytical methods to present findings effectively. Question Addressed: The primary question addressed in this research is how the use of modern technologies and computers in archeological field surveys in Sistan, Iran, can enhance archaeological data collection, preservation, analysis, and accessibility. Conclusion: The utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, has proven to be necessary and beneficial. These technologies aid in preserving ancient sites, accurately recording archaeological data, reducing errors, and facilitating comprehensive analysis. The creation of accessible databases, statistics generation, graphic designs, and interdisciplinary collaborations are further advantages observed. It is recommended to explore the potential of artificial intelligence in Iranian archaeology as an unexplored area. The research has implications for cultural heritage organizations, archaeology students, and universities involved in archaeological field surveys in Sistan and Baluchistan province. Additionally, it contributes to enhancing the understanding and preservation of Iran's archaeological heritage.
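
As a rough illustration of how survey records and GPS coordinates can be organized into a database readable by GIS software, the following is a minimal sketch using pandas and geopandas; the site IDs, coordinates, periods and file names are hypothetical placeholders, not data from the Sistan survey:

# Illustrative sketch (not the survey team's workflow): turning GPS site
# records into a georeferenced inventory usable in GIS and spreadsheets.
import pandas as pd
import geopandas as gpd

# Hypothetical field records: site ID, period, and GPS coordinates (WGS84).
records = pd.DataFrame({
    "site_id": ["SIS-0001", "SIS-0002", "SIS-0003"],
    "period": ["Bronze Age", "Parthian", "Islamic"],
    "lat": [30.95, 31.02, 30.88],
    "lon": [61.50, 61.47, 61.61],
})

sites = gpd.GeoDataFrame(
    records,
    geometry=gpd.points_from_xy(records["lon"], records["lat"]),
    crs="EPSG:4326",
)

# Export to a GIS format and to CSV for spreadsheet analysis.
sites.to_file("sistan_sites.gpkg", layer="sites", driver="GPKG")
records.to_csv("sistan_sites.csv", index=False)

# Simple summary statistics, e.g. site counts per period.
print(sites.groupby("period").size())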

Keywords: archaeological surveys, computer use, Iran, modern technologies, Sistan

Procedia PDF Downloads 77
16 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus

Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert

Abstract:

Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase to the planning and execution phases and on to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model. Creating a compatible BIM for existing buildings is very challenging: it requires special equipment for data capture and effort to convert these data into a BIM model. The main difficulties in such projects are to define the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic, so integrating the existing terrain that surrounds buildings into the digital model is essential in order to run several simulations, such as flood simulation, energy simulation, etc. Making a replica of the physical model and updating its information in real time to form its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the type of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used in order to provide a 3D BIM model for the site and the existing buildings, based on the case study of the “Ecole Spéciale des Travaux Publiques (ESTP Paris)” school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and a height between 50 and 68 meters. In this work, the campus precise levelling grid, referenced to the NGF-IGN69 altimetric system, and the grid control points, computed in the RGF93 (Réseau Géodésique Français) – Lambert 93 French system, were established with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, and (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the input data are identified, the digital model of each building is created, and the DTM is also modeled. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC) and ReViT (RVT) will be generated. Checking the interoperability between BIM models is very important. In this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform.
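
As a rough illustration of the DTM step described above, the following minimal sketch grids a ground-classified point cloud into mean heights per cell; the synthetic points, the 2 m cell size and the simple averaging scheme are assumptions for illustration, not the project’s Lambert-93 workflow:

# Illustrative sketch (not the project's pipeline): gridding ground points
# (x, y, z in a projected CRS) into a simple Digital Terrain Model.
import numpy as np

def dtm_from_points(points: np.ndarray, cell: float = 1.0):
    """points: (N, 3) array of x, y, z; returns a grid of mean z per cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    total = np.zeros((iy.max() + 1, ix.max() + 1))
    count = np.zeros_like(total)
    np.add.at(total, (iy, ix), z)   # accumulate heights per cell
    np.add.at(count, (iy, ix), 1)   # count points per cell
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(count > 0, total / count, np.nan)  # NaN where empty

# Hypothetical ground points for a gently sloping, hilly site.
rng = np.random.default_rng(1)
e = rng.uniform(0, 100, 10000)                 # easting (m)
n = rng.uniform(0, 100, 10000)                 # northing (m)
z = 50 + 0.18 * e + rng.normal(0, 0.05, e.size)  # slope plus measurement noise
grid = dtm_from_points(np.column_stack([e, n, z]), cell=2.0)
print("DTM shape:", grid.shape, "mean height:", round(float(np.nanmean(grid)), 2), "m")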

Keywords: building information modeling, digital terrain model, existing buildings, interoperability

Procedia PDF Downloads 110
15 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms

Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli

Abstract:

Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges. This work presents a solution to improve wastewater treatment plants’ (WWTPs’) ability to react to different situations and meet treatment goals. Delayed BOD5 results from the lab, which take 7 to 8 days of analysis, hinder this ability. Reducing BOD turnaround time from days to hours is our quest. The solution is based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. A DT requires the ability to collect and store real-time sensor data related to the operating environment. Furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process so that anomalies are caught sooner. In our system for continuous-time monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm for analyzing the data applies ML to a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access for the wastewater sample (influent and effluent); hydraulic conduction tubes, pumps, and valves for the batch sample and dilution water; an air supply for dissolved oxygen (DO) saturation; a cooler/heater for sample thermal stability; an optical DO sensor based on fluorescence quenching; pH, ORP, temperature, and atmospheric pressure sensors; and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data are transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available at 12-hour intervals to web users. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically subject to its initial conditions: DO saturated, and initial products of the kinetic oxidation process CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimization of the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in São Paulo, Brazil.
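
The abstract specifies a four-state kinetic model (DO, organic matter, biomass, products) solved with DO initially saturated, zero initial products, and the initial organic matter and biomass estimated by minimizing mean square deviations. The sketch below follows that structure, but the rate laws, kinetic constants and data are illustrative assumptions only (a Monod-type form is used), not the authors’ model:

# Illustrative sketch only: a Monod-type form is assumed for the four-state
# kinetic model. States: dissolved oxygen (DO), organic substrate S, biomass X,
# and reaction products P.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

MU_MAX, K_S, Y, K_O = 0.25, 30.0, 0.5, 0.5   # assumed kinetic constants

def rhs(t, y):
    do, s, x, p = y
    growth = MU_MAX * s / (K_S + s) * do / (K_O + do) * x
    return [-(1 - Y) * growth / Y,    # DO consumed by oxidation
            -growth / Y,              # organic substrate degraded
            growth,                   # biomass produced
            (1 - Y) * growth / Y]     # CO2/H2O products formed

def simulate(s0, x0, t_eval, do_sat=8.0):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [do_sat, s0, x0, 0.0],
                    t_eval=t_eval, method="LSODA")
    return sol.y[0]                   # modelled DO trace

# Hypothetical DO trace standing in for the optical DO sensor readings.
t = np.linspace(0, 12, 49)            # hours
do_measured = simulate(5.0, 1.0, t) + np.random.default_rng(2).normal(0, 0.05, t.size)

# Estimate initial organic matter and biomass by minimising squared deviations.
fit = least_squares(lambda p: simulate(p[0], p[1], t) - do_measured,
                    x0=[3.0, 0.5], bounds=([0.5, 0.05], [20.0, 10.0]))
s0_hat, x0_hat = fit.x
do_consumed = do_measured[0] - do_measured[-1]   # oxygen consumed over the run
print(f"Estimated S0 = {s0_hat:.2f} mg/L, X0 = {x0_hat:.2f} mg/L, "
      f"DO consumed = {do_consumed:.2f} mg/L")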

Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning

Procedia PDF Downloads 72
14 A Descriptive Study on Water Scarcity as a One Health Challenge among the Osiram Community, Kajiado County, Kenya

Authors: Damiano Omari, Topirian Kerempe, Dibo Sama, Walter Wafula, Sharon Chepkoech, Chrispine Juma, Gilbert Kirui, Simon Mburu, Susan Keino

Abstract:

The One Health concept was officially adopted by international organizations and scholarly bodies in 1984. It aims at combining human, animal and environmental components to address global health challenges. Using collaborative efforts, optimal health for people, animals, and the environment can be achieved. The One Health approach plays a significant role in the prevention and control of zoonotic diseases. It has also been noted that 75% of newly emerging human infectious diseases are zoonotic. In Kenya, One Health has been embraced and strongly advocated for by One Health Central and Eastern Africa (OHCEA). It was inaugurated on the 17th of October 2010 at a historic meeting facilitated by USAID, with participants from seven public health schools and seven faculties of veterinary medicine in Eastern Africa, and two American universities (Tufts and the University of Minnesota), in addition to RESPOND project staff. The study was conducted in Loitoktok Sub-County, specifically in the Amboseli ecosystem. The Amboseli ecosystem covers an area of 5,700 square kilometers and stretches between Mt. Kilimanjaro, the Chyulu Hills, Tsavo West National Park and the Kenya/Tanzania border. The area is arid to semi-arid and is more suitable for pastoralism, with a high potential for conservation of wildlife and tourism enterprises. The ecosystem consists of the Amboseli National Park, which is surrounded by six group ranches: Kimana, Olgulului, Selengei, Mbirikani, Kuku and Rombo in Loitoktok District. The Manyatta of study was the Osiram Cultural Manyatta in Mbirikani group ranch. Apart from visiting the Manyatta, we also visited the sub-county hospital, the slaughter slab, the forest service, Kimana market, and the Amboseli National Park. The aim of the study was to identify the One Health issues facing the community. This was done by conducting a community needs assessment and prioritization. Different methods were used to collect the qualitative and numerical data, including key informant interviews and focus group discussions. We also guided the community members in drawing their resource map, which helped identify the major resources on their land and some of the issues they were facing. Matrix piling, root cause analysis, and force field analysis tools were used to establish the One Health related priority issues facing community members. Skits were also used to present interventions for the major One Health issues to the community. Some of the prioritized needs of the community were water scarcity and inadequate markets for their beadwork. The group intervened on the various needs of the Manyatta. For water scarcity, we educated the community on water harvesting methods using gutters, as well as proper storage using tanks and earth dams. The community was also encouraged to recycle and conserve water. To improve markets, we taught the community to upload their products online; a page was opened for them, and uploading photos was demonstrated. They were also encouraged to be innovative to attract more clients.

Keywords: Amboseli ecosystem, community interventions, community needs assessment and prioritization, one health issues

Procedia PDF Downloads 168
13 Modelling Farmer’s Perception and Intention to Join Cashew Marketing Cooperatives: An Expanded Version of the Theory of Planned Behaviour

Authors: Gospel Iyioku, Jana Mazancova, Jiri Hejkrlik

Abstract:

The “Agricultural Promotion Policy (2016–2020)” represents a strategic initiative by the Nigerian government to address domestic food shortages and the challenges in exporting products at the required quality standards. Hindered by an inefficient system for setting and enforcing food quality standards, coupled with a lack of market knowledge, the Federal Ministry of Agriculture and Rural Development (FMARD) aims to enhance support for the production and activities of key crops like cashew. By collaborating with farmers, processors, investors, and stakeholders in the cashew sector, the policy seeks to define and uphold high-quality standards across the cashew value chain. Given the challenges and opportunities faced by Nigerian cashew farmers, active participation in cashew marketing groups becomes imperative. These groups serve as essential platforms for farmers to collectively navigate market intricacies, access resources, share knowledge, improve output quality, and bolster their overall bargaining power. Through engagement in these cooperative initiatives, farmers not only boost their economic prospects but can also contribute significantly to the sustainable growth of the cashew industry, fostering resilience and community development. This study explores the perceptions and intentions of farmers regarding their involvement in cashew marketing cooperatives, utilizing an expanded version of the Theory of Planned Behaviour. Drawing insights from a diverse sample of 321 cashew farmers in Southwest Nigeria, the research sheds light on the factors influencing decision-making in cooperative participation. The demographic analysis reveals a diverse landscape, with a substantial presence of middle-aged individuals contributing significantly to the agricultural sector and cashew-related activities emerging as a primary income source for a substantial proportion (23.99%). Employing Structural Equation Modelling (SEM) with Maximum Likelihood Robust (MLR) estimation in R, the research elucidates the associations among latent variables. Despite the model’s complexity, the goodness-of-fit indices attest to the validity of the structural model, explaining approximately 40% of the variance in the intention to join cooperatives. Moral norms emerge as a pivotal construct, highlighting the profound influence of ethical considerations in decision-making processes, while perceived behavioural control presents potential challenges in active participation. Attitudes toward joining cooperatives reveal nuanced perspectives, with strong beliefs in enhanced connections with other farmers but varying perceptions on improved access to essential information. The SEM analysis establishes positive and significant effects of moral norms, perceived behavioural control, subjective norms, and attitudes on farmers’ intention to join cooperatives. The knowledge construct positively affects key factors influencing intention, emphasizing the importance of informed decision-making. A supplementary analysis using partial least squares (PLS) SEM corroborates the robustness of our findings, aligning with covariance-based SEM results. This research unveils the determinants of cooperative participation and provides valuable insights for policymakers and practitioners aiming to empower and support this vital demographic in the cashew industry.
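
As a rough illustration of how such an expanded Theory of Planned Behaviour model could be specified and estimated, the following is a minimal sketch in lavaan-style syntax using the Python semopy package; the item names, data file and the use of semopy itself are illustrative assumptions, not the authors’ setup (which reports MLR estimation in R plus a PLS-SEM check):

# Illustrative sketch (not the authors' model file): an expanded TPB model
# with knowledge (KN) feeding attitudes (ATT), subjective norms (SN),
# perceived behavioural control (PBC) and moral norms (MN), which in turn
# predict intention to join a cooperative (INT). Item names are placeholders.
import pandas as pd
import semopy

desc = """
ATT =~ att1 + att2 + att3
SN  =~ sn1 + sn2 + sn3
PBC =~ pbc1 + pbc2 + pbc3
MN  =~ mn1 + mn2 + mn3
KN  =~ kn1 + kn2 + kn3
INT =~ int1 + int2 + int3
ATT ~ KN
SN  ~ KN
PBC ~ KN
MN  ~ KN
INT ~ ATT + SN + PBC + MN
"""

data = pd.read_csv("cashew_survey.csv")   # hypothetical item-level responses
model = semopy.Model(desc)
model.fit(data)                           # default ML; robust options vary by version
print(model.inspect())                    # loadings and path coefficients
print(semopy.calc_stats(model))           # CFI, TLI, RMSEA and other fit indices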

Keywords: marketing cooperatives, theory of planned behaviour, structural equation modelling, cashew farmers

Procedia PDF Downloads 84
12 Pisolite Type Azurite/Malachite Ore in Sandstones at the Base of the Miocene in Northern Sardinia: The Authigenic Hypothesis

Authors: S. Fadda, M. Fiori, C. Matzuzzi

Abstract:

Mineralized formations in the bottom sediments of a Miocene transgression have been discovered in Sardinia. The mineral assemblage consists of copper sulphides and oxidates, suggesting fluctuations of redox conditions in neutral to high-pH, restricted, shallow-water coastal basins. Azurite/malachite has been observed as authigenic and occurs as loose spheroidal crystalline particles associated with the transitional-littoral horizon forming the bottom of the marine transgression. Many field observations are consistent with a supergene circulation of metals involving terrestrial groundwater-seawater mixing. Both the clastic materials and the metals come from Tertiary volcanic edifices, while the main precipitating anions, carbonate and sulphide species, are of both continental and marine origin. Formation of the Cu carbonates as a supergene secondary 'oxide' assemblage does not agree with the field evidence, the petrographic observations, or the textural evidence in the host-rock types. Samples were collected along the sedimentary sequence for different analyses: the majority of elements were determined by X-ray fluorescence and plasma-atomic emission spectroscopy. Mineral identification was obtained by X-ray diffractometry and scanning electron microprobe. Thin sections of the samples were examined under the microscope, while porosity measurements were made using a mercury intrusion porosimeter. The Cu-carbonates deposited at a temperature below 100 °C, which is consistent with the clay minerals in the matrix of the host rock, dominated by illite and montmorillonite. Azurite nodules grew during the early diagenetic stage through the reaction of cupriferous solutions with CO₂ imported from the overlying groundwater and circulating through the sandstones during shallow burial. Decomposition of organic matter in the anoxic bottom waters released additional carbon dioxide to the pore fluids, supporting azurite stability. In this manner, localized reducing environments were also generated in which Cu was fixed as Cu-sulphides and sulphosalts. Microscopic examination of the textural features of the azurite nodules gives evidence of primary malachite/azurite deposition rather than supergene oxidation in place of primary sulphides. Photomicrographs show nuclei of azurite and malachite surrounded by newly formed microcrystalline carbonates which constitute the matrix. The typical pleochroism of the crystals can also be observed when this mineral fills microscopic fissures or cracks. Sedimentological evidence of transgression and regression indicates that the pore water would have been a variable mixture of marine water and groundwater, with a possible meteoric component, in an alternately exposed and subaqueous environment owing to water-level fluctuations. Salinity data for the pore fluids, assessed at random intervals along the mineralised strata, confirmed values between about 7,000 and 30,000 ppm measured in coeval sediments at the base of the Miocene, falling in the range of more or less diluted seawater. This suggests a variation in mean pore-fluid pH between 5.5 and 8.5, compatible with the oxidized and reduced mineral parageneses described in this work. The results of stable isotope studies reflect the marine transgressive-regressive cyclicity of events and are compatible with carbon derivation from seawater. During the last oxidative stage of diagenesis, under surface conditions of higher H₂O and O₂ activity, the CO₂ partial pressure decreased, and malachite became the stable Cu mineral. The potential for these small but high-grade deposits does exist.

Keywords: sedimentary, Cu-carbonates, authigenic, tertiary, Sardinia

Procedia PDF Downloads 130
11 Circular Nitrogen Removal, Recovery and Reuse Technologies

Authors: Lina Wu

Abstract:

The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and threatens water quality. Nitrogen pollution control has become a global concern. The concentration of nitrogen in water is reduced by converting ammonia nitrogen, nitrate nitrogen and nitrite nitrogen into nitrogen-containing gas through biological treatment, physicochemical treatment and oxidation technology. However, some wastewater with high ammonia nitrogen, including landfill leachate, is difficult to treat by traditional nitrification and denitrification because of its high COD content. The core process of denitrification is that denitrifying bacteria convert the nitrite produced by nitrification into nitrogen gas under anaerobic conditions; still, the low carbon-to-nitrogen ratio of leachate does not meet the conditions for denitrification. Many studies have shown that naturally occurring autotrophic anammox bacteria can combine nitrite and ammonia nitrogen, without a carbon source, through their functional genes to achieve total nitrogen removal, which is very suitable for removing nitrogen from leachate. In addition, the process saves a great deal of aeration energy compared with the traditional nitrogen removal process. Therefore, anammox plays an important role in nitrogen conversion and energy saving. Partial (short-cut) nitrification and denitrification coupled with anammox ensures total nitrogen removal and improves removal efficiency, meeting the needs of society for an ecologically friendly and cost-effective nutrient removal treatment technology. In recent years, research has found that algae-bacteria symbiotic systems offer additional advantages for water treatment, because this approach not only helps to improve the efficiency of wastewater treatment but also allows carbon dioxide reduction and resource recovery. Microalgae use carbon dioxide dissolved in water or released through bacterial respiration to produce oxygen for bacteria through photosynthesis under light, and bacteria, in turn, provide metabolites and inorganic carbon sources for the growth of microalgae, which may allow the algae-bacteria symbiotic system to save most or all of the aeration energy consumption. It has become a trend to make microalgae and light-avoiding anammox bacteria play synergistic roles by adjusting the light-to-dark ratio. Microalgae in the outer layer of the light-exposed granules block most of the light and provide cofactors and amino acids that promote nitrogen removal. In particular, Myxococcota MYX1 can degrade extracellular proteins produced by microalgae, providing amino acids for the entire bacterial community, which helps anammox bacteria save metabolic energy and adapt to light. As a result, initiating and maintaining a process that combines dominant algae with anaerobic denitrifying bacterial communities has great potential in treating landfill leachate. Chlorella has an excellent removal effect and can withstand extreme environments in terms of high ammonia nitrogen, high salinity and low temperature. It is urgent to study whether an algae-sludge mixture rich in denitrifying bacteria and Chlorella can greatly improve the efficiency of landfill leachate treatment in an anaerobic environment where photosynthesis has stopped. The optimal dilution of simulated landfill leachate can be found by determining the treatment performance of the same batch of bacteria-algae mixtures under different initial ammonia nitrogen concentrations and comparing the results. High-throughput sequencing technology was used to analyze the changes in microbial diversity, related functional genera and functional genes under optimal conditions, providing a theoretical and practical basis for the engineering application of the novel bacteria-algae symbiosis system in biogas slurry treatment and resource utilization.

Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification, algae-bacteria interaction

Procedia PDF Downloads 38
10 Tailoring Workspaces for Generation Z: Harmonizing Teamwork, Privacy, and Connectivity

Authors: Maayan Nakash

Abstract:

The modern workplace is undergoing a revolution, with Generation Z (Gen-Z) at the forefront of this transformative shift. However, empirical investigations specifically targeting the workplace preferences of this generation remain limited. Through direct examination of their tendencies via a survey approach, this study offers vital insights for aligning organizational policies and practices. The results presented in this paper are part of a comprehensive study that explored Gen Z's viewpoints on various employment market aspects, likely to decisively influence the design of future work environments. Data were collected via an online survey distributed among a cohort of 461 individuals from Gen-Z, born between the mid-1990s and 2010, consisting of 241 males (52.28%) and 220 females (47.72%). Responses were gauged using Likert scale statements that probed preferences for teamwork versus individual work, virtual versus personal interactions, and open versus private workspaces. Descriptive statistics and analytical analyses were conducted to pinpoint key patterns. We discovered that a high proportion of respondents (81.99%, n=378) exhibited a preference for teamwork over individual work. Correspondingly, the data indicate strong support for the recognition of team-based tasks as a tool contributing to personal and professional development. In terms of communication, the majority of respondents (61.38%) either disagreed (n=154) or slightly agreed (n=129) with the exclusive reliance on virtual interactions with their organizational peers. This finding underscores that despite technological progress, digital natives place significant value on physical interaction and non-mediated communication. Moreover, we understand that they also value a quiet and private work environment, clearly preferring it over open and shared workspaces. Considering that Gen-Z does not necessarily experience high levels of stress within social frameworks in the workplace, this can be attributed to a desire for a space that allows for focused engagement with work tasks. A One-Sample Chi-Square Test was performed on the observed distribution of respondents' reactions to each examined statement. The results showed statistically significant deviations from a uniform distribution (p<.001), indicating that the response patterns did not occur by chance and that there were meaningful tendencies in the participants' responses. The findings expand the theoretical knowledge base on human resources in the dynamics of a multi-generational workforce, illuminating the values, approaches, and expectations of Gen-Z. Practically, the results may lead organizations to equip themselves with tools to create policies tailored to Gen-Z in the context of workspaces and social needs, which could potentially foster a fertile environment and aid in attracting and retaining young talent. Future studies might include investigating potential mitigating factors, such as cultural influences or individual personality traits, which could further clarify the nuances in Gen-Z's work style preferences. Longitudinal studies tracking changes in these preferences as the generation matures may also yield valuable insights. Ultimately, as the landscape of the workforce continues to evolve, ongoing investigations into the unique characteristics and aspirations of emerging generations remain essential for nurturing harmonious, productive, and future-ready organizational environments.
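
As a rough illustration of the one-sample chi-square test against a uniform distribution described above, the following is a minimal sketch using scipy with hypothetical response counts for a single Likert statement (n = 461); the counts are invented for illustration only:

# Illustrative sketch (not the study's analysis script): one-sample chi-square
# test of observed Likert responses against a uniform expected distribution.
from scipy.stats import chisquare

# Hypothetical counts across five Likert categories, loosely reflecting the
# reported preference for teamwork (total n = 461).
observed = [12, 25, 46, 180, 198]

# With no expected argument, chisquare tests against equal (uniform) counts.
stat, p_value = chisquare(observed)
print(f"chi2 = {stat:.1f}, p = {p_value:.3g}")  # p < .001 indicates non-uniform responses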

Keywords: workplace, future of work, generation Z, digital natives, human resources management

Procedia PDF Downloads 51
9 Next-Generation Lunar and Martian Laser Retro-Reflectors

Authors: Simone Dell'Agnello

Abstract:

There are laser retroreflectors on the Moon and no laser retroreflectors on Mars. Here we describe the design, construction, qualification and imminent deployment of next-generation, optimized laser retroreflectors on the Moon and on Mars (where they will be the first ones). These instruments are positioned by time-of-flight measurements of short laser pulses, the so-called 'laser ranging' technique. Data analysis is carried out with PEP, the Planetary Ephemeris Program of the CfA (Center for Astrophysics). Since 1969, Lunar Laser Ranging (LLR) to the Apollo/Lunokhod laser retro-reflector (CCR) arrays has supplied accurate tests of General Relativity (GR) and new gravitational physics: possible changes of the gravitational constant (Gdot/G), the weak and strong equivalence principle, gravitational self-energy (the Parametrized Post-Newtonian parameter beta), geodetic precession, and the inverse-square force law; it can also constrain gravitomagnetism. Some of these measurements have also allowed tests of extensions of GR, including spacetime torsion and non-minimally coupled gravity. LLR has also provided significant information on the composition of the deep interior of the Moon. In fact, LLR first provided evidence of the existence of a fluid component of the deep lunar interior. In 1969, CCR arrays contributed a negligible fraction of the LLR error budget. Since laser station range accuracy has improved by more than a factor of 100, the current arrays now dominate the error budget because of lunar librations acting on their multi-CCR geometry. We developed a next-generation, single, large CCR, MoonLIGHT (Moon Laser Instrumentation for General relativity high-accuracy test), unaffected by librations, which supports an improvement of the space segment of the LLR accuracy by up to a factor of 100. INFN also developed INRRI (INstrument for landing-Roving laser Retro-reflector Investigations), a microreflector to be laser-ranged by orbiters. Their performance is characterized at the SCF_Lab (Satellite/lunar laser ranging Characterization Facilities Lab, INFN-LNF, Frascati, Italy) for deployment on the lunar surface or in cislunar space. They will be used to accurately position landers, rovers, hoppers and orbiters of Google Lunar X Prize and space agency missions, thanks to LLR observations from stations of the International Laser Ranging Service in the USA, France and Italy. INRRI was launched in 2016 with the ESA mission ExoMars (Exobiology on Mars) EDM (Entry, descent and landing Demonstration Module), deployed on the Schiaparelli lander, and is proposed for the ExoMars 2020 Rover. Based on an agreement between NASA and ASI (Agenzia Spaziale Italiana), another microreflector, LaRRI (Laser Retro-Reflector for InSight), was delivered to JPL (Jet Propulsion Laboratory) and integrated on NASA’s InSight Mars lander in August 2017 (launch scheduled for May 2018). Another microreflector, LaRA (Laser Retro-reflector Array), will be delivered to JPL for deployment on the NASA Mars 2020 Rover. The first lunar landing opportunities will be from early 2018 (with TeamIndus) to late 2018 with commercial missions, followed by opportunities with space agency missions, including the proposed deployment of MoonLIGHT and INRRI on NASA’s Resource Prospector and its evolutions. In conclusion, we will significantly extend the CCR Lunar Geophysical Network and populate the Mars Geophysical Network. These networks will enable very significantly improved tests of GR.

Keywords: general relativity, laser retroreflectors, lunar laser ranging, Mars geodesy

Procedia PDF Downloads 269
8 The Distribution of Prevalent Supplemental Nutrition Assistance Program-Authorized Food Store Formats Differ by U.S. Region and Rurality: Implications for Food Access and Obesity Linkages

Authors: Bailey Houghtaling, Elena Serrano, Vivica Kraak, Samantha Harden, George Davis, Sarah Misyak

Abstract:

United States (U.S.) Department of Agriculture Supplemental Nutrition Assistance Program (SNAP) participants are low-income Americans receiving federal dollars for supplemental food and beverage purchases. Participants use a variety of (traditional and non-traditional) SNAP-authorized stores for household dietary purchases, which also represent food access points for all Americans. Importantly, consumers' food and beverage purchases from non-traditional store formats tend to be higher in saturated fats, added sugars, and sodium when compared to purchases from traditional (e.g., grocery/supermarket) formats. Overconsumption of energy-dense and low-nutrient food and beverage products contributes to high obesity rates and adverse health outcomes that differ in severity among urban/rural U.S. locations and high/low-income populations. Little is known about the SNAP-authorized food store format landscape nationally, regionally, or by urban-rural status, as traditional formats are currently used as the gold standard in food access research. This research utilized publicly available U.S. databases to fill this large literature gap and to provide insight into modes of food access for vulnerable U.S. populations: (1) the SNAP Retailer Locator, which provides a list of all authorized food stores in the U.S., and (2) Rural-Urban Continuum Codes (RUCC), which categorize U.S. counties as urban (RUCC 1-3) or rural (RUCC 4-9). Frequencies were determined for the highest occurring food store formats nationally and within two regionally diverse U.S. states, Virginia in the east and California in the west. Store format codes were assigned (e.g., grocery, drug, convenience, mass merchandiser, supercenter, dollar, club, or other). RUCC was applied to investigate state-level differences in urbanity-rurality regarding prevalent food store formats, and the chi-square test of independence was used to determine whether food store format distributions significantly (p < 0.05) differed by region or rurality. The resulting research sample that represented highly prevalent SNAP-authorized food stores nationally included 41.25% of all SNAP stores in the U.S. (N=257,839), comprised primarily of convenience formats (31.94%), followed by dollar (25.58%), drug (19.24%), traditional (10.87%), supercenter (6.85%), mass merchandiser (1.62%), non-food store or restaurant (1.81%), and club formats (1.09%). Results also indicated that the distribution of prevalent SNAP-authorized formats significantly differed by state. California had a lower proportion of traditional (9.96%) and a higher proportion of drug (28.92%) formats than Virginia (11.55% and 19.97%, respectively; p < 0.001). Virginia also had a higher proportion of dollar formats (26.11%) when compared to California (10.64%) (p < 0.001). Significant differences were also observed for rurality variables (p < 0.001). Prominently, rural Virginia had a significantly higher proportion of dollar formats (41.71%) when compared to urban Virginia (21.78%) and rural California (21.21%). Non-traditional SNAP-authorized formats are highly prevalent and significantly differ in distribution by U.S. region and rurality. The largest proportional difference was observed for dollar formats, where the least nutritious consumer purchases are documented in the literature. Researchers and practitioners should investigate non-traditional food stores at the local level, using these research findings and similar applied methodologies, to determine how access to various store formats impacts obesity prevalence. For example, dollar stores may be prime targets for interventions to enhance nutritious consumer purchases in rural Virginia, while targeting drug formats in California may be more appropriate.
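
As a rough illustration of the chi-square test of independence used to compare format distributions, the following minimal sketch uses scipy with a hypothetical urban/rural-by-format contingency table standing in for the SNAP Retailer Locator counts; all counts are invented for illustration:

# Illustrative sketch (not the authors' analysis): chi-square test of
# independence for store-format distribution by rurality.
import numpy as np
from scipy.stats import chi2_contingency

formats = ["convenience", "dollar", "drug", "traditional", "supercenter"]
counts = np.array([
    [5200, 3560, 3260, 1890, 1120],   # urban store counts (hypothetical)
    [1480, 2110,  610,  450,  330],   # rural store counts (hypothetical)
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")

# Format mix within urban vs rural rows shows where distributions diverge
# (e.g., the larger dollar-store share in rural areas).
share = counts / counts.sum(axis=1, keepdims=True)
for name, urban, rural in zip(formats, share[0], share[1]):
    print(f"{name:>12}: urban {urban:.1%} vs rural {rural:.1%}")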

Keywords: food access, food store format, nutrition interventions, SNAP consumers

Procedia PDF Downloads 139
7 Contactless Heart Rate Measurement System based on FMCW Radar and LSTM for Automotive Applications

Authors: Asma Omri, Iheb Sifaoui, Sofiane Sayahi, Hichem Besbes

Abstract:

Future vehicle systems demand advanced capabilities, notably in-cabin life detection and driver monitoring systems, with a particular emphasis on drowsiness detection. To meet these requirements, several techniques employ artificial intelligence methods based on real-time vital sign measurements. In parallel, Frequency-Modulated Continuous-Wave (FMCW) radar technology has garnered considerable attention in the domains of healthcare and biomedical engineering for non-invasive vital sign monitoring. FMCW radar offers a multitude of advantages, including its non-intrusive nature, continuous monitoring capacity, and its ability to penetrate through clothing. In this paper, we propose a system utilizing the AWR6843AOP radar from Texas Instruments (TI) to extract precise vital sign information. The radar allows us to estimate Ballistocardiogram (BCG) signals, which capture the mechanical movements of the body, particularly the ballistic forces generated by heartbeats and respiration. These signals are rich sources of information about the cardiac cycle, rendering them suitable for heart rate estimation. The process begins with real-time subject positioning, followed by clutter removal, computation of Doppler phase differences, and the use of various filtering methods to accurately capture subtle physiological movements. To address the challenges associated with FMCW radar-based vital sign monitoring, including motion artifacts due to subjects' movement or radar micro-vibrations, Long Short-Term Memory (LSTM) networks are implemented. LSTM's adaptability to different heart rate patterns and ability to handle real-time data make it suitable for continuous monitoring applications. Several crucial steps were taken, including feature extraction (involving amplitude, time intervals, and signal morphology), sequence modeling, heart rate estimation through the analysis of detected cardiac cycles and their temporal relationships, and performance evaluation using metrics such as Root Mean Square Error (RMSE) and correlation with reference heart rate measurements. For dataset construction and LSTM training, a comprehensive data collection system was established, integrating the AWR6843AOP radar, a heart rate belt, and a smartwatch for ground truth measurements. Rigorous synchronization of these devices ensured data accuracy. Twenty participants engaged in various scenarios, encompassing indoor and real-world conditions within a moving vehicle equipped with the radar system. Static and dynamic subject conditions were considered. The heart rate estimation through LSTM outperforms traditional signal processing techniques that rely on filtering, Fast Fourier Transform (FFT), and thresholding. It delivers an average accuracy of approximately 91% with an RMSE of 1.01 beats per minute (bpm). In conclusion, this paper underscores the promising potential of FMCW radar technology integrated with artificial intelligence algorithms in the context of automotive applications. This innovation not only enhances road safety but also paves the way for its integration into the automotive ecosystem to improve driver well-being and overall vehicular safety.
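
As a rough illustration of an LSTM regressor mapping windows of radar-derived features to heart rate and evaluated with RMSE, the following is a minimal sketch in Keras; the architecture, window length, feature count and synthetic data are assumptions, not the authors’ network or dataset:

# Illustrative sketch (not the authors' network): LSTM regression from
# windowed radar/BCG features to heart rate, evaluated with RMSE.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 200, 4        # e.g., 200 time steps of 4 extracted features

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),    # predicted heart rate in beats per minute
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# Synthetic stand-in for the radar/chest-belt training data.
rng = np.random.default_rng(3)
x_train = rng.normal(size=(512, WINDOW, FEATURES)).astype("float32")
y_train = rng.uniform(55, 110, size=(512, 1)).astype("float32")

model.fit(x_train, y_train, epochs=5, batch_size=32, validation_split=0.2, verbose=0)

y_pred = model.predict(x_train[:64], verbose=0)
rmse = float(np.sqrt(np.mean((y_pred - y_train[:64]) ** 2)))
print(f"RMSE on sample batch: {rmse:.2f} bpm")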

Keywords: ballistocardiogram, FMCW Radar, vital sign monitoring, LSTM

Procedia PDF Downloads 72
6 Language Anxiety and Learner Achievement among University Undergraduates in Sri Lanka: A Case Study of University of Sri Jayewardenepura

Authors: Sujeeva Sebastian Pereira

Abstract:

Language Anxiety (LA), a distinct psychological construct of self-perceptions and behaviors related to classroom language learning, is perceived as a significant variable highly correlated with Second Language Acquisition (SLA). However, existing scholarship has inadequately explored the nuances of LA in relation to South Asia, especially Sri Lankan higher education contexts. Thus, the current study, situated within the broad areas of the Psychology of SLA and Applied Linguistics, investigates the impact of competency-based LA and identity-based LA on learner achievement among undergraduates in Sri Lanka. Employing a case study approach to explore the impact of LA, 750 undergraduates of the University of Sri Jayewardenepura, Sri Lanka (covering 25% of the student population from all seven faculties of the university), were selected as participants using stratified proportionate sampling in terms of ethnicity, gender, and discipline. The qualitative and quantitative research instruments used for data collection include a questionnaire consisting of a set of structured and unstructured questions, and semi-structured interviews. Data analysis includes both descriptive and statistical measures. For the quantitative analysis, the study employed the Pearson Correlation Coefficient test, the Chi-Square test, and Multiple Correspondence Analysis; LA was used as the dependent variable, and two types of independent variables were used: direct and indirect. Direct variables encompass the four main language skills (reading, writing, speaking and listening) and test anxiety. These variables were further explored through classroom activities on grammar, vocabulary, and individual and group presentations. Indirect variables are identity, gender and cultural stereotypes, discipline, social background, income level, ethnicity, religion and parents’ education level. Learner achievement was measured through the final scores the participants obtained for Compulsory English, a common first-year course unit mandatory for all undergraduates. LA was measured using the FLCAS. To increase the validity and reliability of the study, the data collected were triangulated through descriptive content analysis. Clearly evident through both the statistical and the qualitative analysis of the results is a significant linear negative correlation between LA and learner achievement, as well as a significant negative correlation between LA and culturally operated gender stereotypes which create identity disparities in learners. The study also found that both competency-based LA and identity-based LA are experienced primarily, and inescapably, due to apprehensions regarding speaking in English. Most participants who reported high levels of LA were from an urban socio-economic background of lower-income families. The findings exemplify the linguistic inequality prevalent in the socio-cultural milieu of Sri Lankan society. This inequality makes learning English a dire need yet a very anxiety-provoking process because of the many sociolinguistic, cultural and ideological factors related to English as a Second Language (ESL) in Sri Lanka. The findings bring out the intricate interrelatedness of the dependent variable (LA) and the independent variables stated above, emphasizing that the significant linear negative correlation between LA and learner achievement is connected to the affective, cognitive and sociolinguistic domains of SLA. Thus, the study highlights the promise of linguistic practices such as code-switching, crossing and accommodating hybrid identities as strategies for minimizing LA and maximizing the experience of ESL.
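
As a rough illustration of the correlation and chi-square analyses described above, the following is a minimal sketch assuming a hypothetical per-student data file with FLCAS totals, Compulsory English scores, and categorical variables; the file and column names are invented, not the study’s dataset:

# Illustrative sketch (not the study's analysis): Pearson correlation between
# FLCAS-based language-anxiety scores and Compulsory English achievement,
# plus a chi-square test of association with a categorical variable.
import pandas as pd
from scipy.stats import pearsonr, chi2_contingency

df = pd.read_csv("la_survey.csv")     # hypothetical: one row per undergraduate

# A negative r is expected if higher anxiety accompanies lower achievement.
r, p = pearsonr(df["flcas_total"], df["compulsory_english_score"])
print(f"Pearson r = {r:.2f}, p = {p:.3g}")

# Association between anxiety level (e.g., low/mid/high) and gender.
table = pd.crosstab(df["anxiety_level"], df["gender"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")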

Keywords: language anxiety, identity-based anxiety, competence-based anxiety, TESL, Sri Lanka

Procedia PDF Downloads 189