Search results for: measurement accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6151

271 Tertiary Level Teachers' Beliefs about Codeswitching

Authors: Hoa Pham

Abstract:

Code switching, which can be described as the use of students' first language in second language classrooms, has long been a controversial topic in the area of language teaching and second language acquisition. While it has been widely investigated across different contexts, little empirical research has been undertaken in Vietnam. The findings of this study contribute to our understanding of bilingual discourse and code switching practices in content and language integrated classrooms, which has significant implications for language teaching and learning in general, and for language pedagogy at the tertiary level in Vietnam in particular. This study examines the accounts teachers gave of their code switching practices in content-based Business English classes in Vietnam. Data were collected from five teachers through stimulated recall interviews facilitated by video data, which elicited the teachers' cognitive reflections and allowed them to vocalise the motivations behind their code switching behaviour in particular contexts. The literature suggests that when participants are provided with a large amount of stimuli or cues, they can re-experience the original situation in their imagination with great accuracy. This technique can also provide a valuable "insider" perspective on the phenomenon under investigation, which complements the researcher's "outsider" observation. It can create a relaxed atmosphere during the interview process, which in turn promotes the collection of rich and diverse data. Participants can also be empowered by this technique, as they can raise their own concerns and discuss instances they find important or interesting. The data generated through this study were analysed using a constant comparative approach. The study found that the teachers supported the use of code switching in their pedagogical practices. In particular, as a pedagogical resource, the teachers saw code switching to the L1 as playing a key role in facilitating the students' comprehension of both content knowledge and the target language. They believed the use of the L1 accommodates the students' current language competence and content knowledge. They also expressed positive opinions about the role code switching plays in activating students' schematic knowledge of language and content, encouraging retention and interest in learning, and promoting a positive affective environment in the classroom. The teachers perceived that their use of code switching to the L1 helps them meet the students' language needs, prepares students for their study in subsequent courses, and addresses functional needs so that students can cope with English language use outside the classroom. Several factors shaped the teachers' perceptions of their code switching practices, including their accumulated teaching experience, their previous experience as language learners, their theoretical understanding of language teaching and learning, and their knowledge of the teaching context. Code switching was a typical phenomenon in the observed classes and was supported by the teachers in certain contexts. This study reinforces the call in the literature to recognise this practice as a useful instructional resource.

Keywords: codeswitching, language teaching, teacher beliefs, tertiary level

Procedia PDF Downloads 455
270 A Case Study of Remote Location Viewing, and Its Significance in Mobile Learning

Authors: James Gallagher, Phillip Benachour

Abstract:

As location aware mobile technologies become ever more omnipresent, the prospect of exploiting their context awareness to reinforce learning approaches grows. Building on the growing acceptance of ubiquitous computing and the steady progress in both the accuracy and the battery usage of pervasive devices, we present a case study of remote location viewing and show how the application can support mobile learning in situ using an existing scenario. Through the case study we introduce an innovative application, Mobipeek, based around a request/response protocol for viewing a remote location, and explore how it can apply both to teacher-led activities and to informal learning situations. The system developed allows a user to select a point on a map and send a request. Users can attach messages alongside time and distance constraints. Users within the bounds of the request can respond with an image and an accompanying message, providing context to the response. This application can be used alongside a structured learning activity such as the use of mobile phone cameras outdoors as part of an interactive lesson. An example of a learning activity would be to collect photos in the wild of plants, vegetation, and foliage as part of a geography or environmental science lesson. Another example could be to take photos of architectural buildings and monuments as part of an architecture course. These images can be uploaded and then displayed back in the classroom for students to share their experiences and compare their findings with their peers. This can help foster students' active participation while helping them to understand lessons in a more interesting and effective way. Mobipeek could augment the student learning experience by providing further interaction with peers in a remote location. The activity can be part of a wider study between schools in different areas of the country, enabling sharing and interaction between more participants. Remote location viewing can be used to access images in a specific location; the choice of location will depend on the activity and lesson. For example, architectural buildings of a specific period can be shared between two or more cities. The augmentation of the learning experience is manifested in the different contextual and cultural influences as well as in the sharing of images from different locations. In addition to the implementation of Mobipeek, we analyse this application and a set of other possible solutions targeted towards making learning more engaging. Consideration is given to the benefits of such a system, privacy concerns, and the feasibility of widespread usage. We also propose elements of "gamification" in an attempt to further the engagement derived from such a tool and encourage usage. We conclude by identifying limitations from both a technical and a mobile learning perspective.
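
To make the request/response protocol concrete, here is a minimal Python sketch of the exchange described above. The field names, the distance formula, and the matching rule are assumptions for illustration, not the authors' published protocol.

```python
# Hypothetical sketch of a Mobipeek-style request/response exchange.
import math
from dataclasses import dataclass

@dataclass
class ViewRequest:
    lat: float             # requested map point
    lon: float
    message: str           # note attached by the requester
    max_distance_m: float  # distance constraint
    expires_at: float      # time constraint (epoch seconds)

@dataclass
class ViewResponse:
    image_path: str        # photo captured at the location
    message: str           # context supplied by the responder

def within_bounds(req: ViewRequest, lat: float, lon: float, now: float) -> bool:
    """A device answers only if it is inside the request's space/time bounds."""
    if now > req.expires_at:
        return False
    # Equirectangular approximation, adequate for short distances.
    dx = math.radians(req.lon - lon) * 6371000 * math.cos(math.radians(lat))
    dy = math.radians(req.lat - lat) * 6371000
    return math.hypot(dx, dy) <= req.max_distance_m
```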

Keywords: context aware, location aware, mobile learning, remote viewing

Procedia PDF Downloads 294
269 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack

Authors: Varun Agarwal

Abstract:

Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a holistic evaluation of the entire scanned image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and improve breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task through the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation, adding the convolved outputs of the inception unit to the preceding input data. Moreover, in order to augment the performance of the transfer learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer learning Inception-ResNetV2 network enhanced with the CDAE stack yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention, combined with the residual connections to inception units and synergized with the input denoising algorithm, enables the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
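
A minimal sketch of the tile-level transfer-learning classifier described above, assuming TensorFlow/Keras; the input size, head layers, and optimizer settings are illustrative choices, not the authors' exact configuration.

```python
# Transfer-learning tile classifier: ImageNet-pretrained Inception-ResNetV2
# backbone with a small binary head (tumor vs. normal tile).
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # fine-tune later by unfreezing the top blocks

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["AUC"])
# Per-tile probabilities are then stitched into a probabilistic tumor map
# over the whole slide image, as in the pipeline's fourth stage.
```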

Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images

Procedia PDF Downloads 136
268 ePAM: Advancing Sustainable Mobility through Digital Parking, AI-Driven Vehicle Recognition, and CO₂ Reporting

Authors: Robert Monsberger

Abstract:

The increasing scarcity of resources and the pressing challenge of climate change demand transformative technological, economic, and societal approaches. In alignment with the European Green Deal's goal to achieve net-zero greenhouse gas emissions by 2050, this paper presents the development and implementation of an electronic parking and mobility system (ePAM). This system offers a distinct, integrated solution aimed at promoting climate-positive mobility, reducing individual vehicle use, and advancing the digital transformation of off-street parking. The core objectives include the recognition of electric vehicles and occupant counts at high accuracy using advanced camera-based systems. This capability enables the dynamic categorization and classification of vehicles to provide fair and automated tariff adjustments. The study also seeks to replace physical barriers with virtual ‘digital gates’ using augmented reality, significantly improving user acceptance, as shown in the studies conducted. The system is designed to operate as an end-to-end software solution, enabling a fully digital and paperless parking management system by leveraging license plate recognition (LPR) and metadata processing. By eliminating physical infrastructure such as gates and terminals, the system significantly reduces resource consumption, maintenance complexity, and operational costs while enhancing energy efficiency. The platform also integrates CO₂ reporting tools to support compliance with upcoming EU emission trading schemes and to incentivize eco-friendly transportation behaviors. By fostering the adoption of electric vehicles and ride-sharing models, the system contributes to the optimization of traffic flows and the minimization of search traffic in urban centers. The platform's open data interfaces enable seamless integration into multimodal transport systems, facilitating a transition from individual to public transportation modes. This study emphasizes sustainability, data privacy, and compliance with the AI Act, aiming to achieve a market share of at least 4.5% in the DACH region by 2030. ePAM sets a benchmark for innovative mobility solutions, driving significant progress toward climate-neutral urban mobility.

Keywords: sustainable mobility, digital parking, AI-driven vehicle recognition, license plate recognition, virtual gates, multimodal transport integration

Procedia PDF Downloads 10
267 The Role of Serum Fructosamine as a Monitoring Tool in Gestational Diabetes Mellitus Treatment in Vietnam

Authors: Truong H. Le, Ngoc M. To, Quang N. Tran, Luu T. Cao, Chi V. Le

Abstract:

Introduction: In Vietnam, current monitoring and treatment of ordinary diabetic patients is mostly based on glucose monitoring, with an HbA1c test every three months (recommended goal: HbA1c < 6.5%~7%). For diabetes in pregnant women, or gestational diabetes mellitus (GDM), glycemic control until the time of delivery is extremely important because it can significantly reduce medical complications for both the mother and the child. Moreover, GDM requires continuous glucose monitoring at least every two weeks, and an alternative marker of glycemia for short-term control is therefore considered a potential tool for healthcare providers. Published studies have indicated that glycosylated serum protein is a better indicator than glycosylated hemoglobin in GDM monitoring. Based on actual practice in Vietnam, this study was designed to evaluate the role of serum fructosamine as a monitoring tool in GDM treatment and its correlations with fasting blood glucose (G0), 2-hour postprandial glucose (G2), and glycosylated hemoglobin (HbA1c). Methods: A cohort study of pregnant women diagnosed with GDM by the 75-gram oral glucose tolerance test was conducted at the Endocrinology Department, Cho Ray hospital, Vietnam from June 2014 to March 2015. Cho Ray hospital is the final referral destination for GDM patients in the south of Vietnam; the study population is drawn from many other provinces, and the researchers therefore believe this demographic characteristic allows the study results to reflect the whole area. In this study, diabetic patients received a continuous glucose monitoring regimen consisting of on-site visits every two weeks with glycosylated serum protein, fasting blood glucose and 2-hour postprandial glucose tests; an HbA1c test every 3 months; and nutrition counselling for a daily diet program. The subjects still received routine treatment at the hospital, with tight follow-up from their healthcare providers. Researchers recorded bi-weekly health conditions, serum fructosamine levels and delivery outcomes of the pregnant women, using Stata 13 for the analysis. Results: A total of 500 pregnant women were enrolled and followed up in this study. Serum fructosamine level was found to have a weak correlation with G0 (r = 0.3458, p < 0.001) and HbA1c (r = 0.3544, p < 0.001), and was moderately correlated with G2 (r = 0.4379, p < 0.001). During the study, the delivery outcomes of 287 women were recorded, with a mean gestational age at delivery of 38.5 ± 1.5 weeks; 9% of infants had macrosomia, 2.8% were born prematurely before week 35 and 9.8% before week 37; 64.8% of deliveries were by cesarean section, and there was no perinatal or neonatal mortality. The study provides a reference interval of serum fructosamine for GDM patients of 112.9 ± 20.7 μmol/dL. Conclusion: The present results suggest that serum fructosamine is as effective as HbA1c as a reflection of blood glucose control in GDM patients, with positive delivery outcomes (0% perinatal or neonatal mortality). The provided reference value of serum fructosamine offers a potential monitoring utility in GDM treatment for hospitals in Vietnam. Healthcare providers at Cho Ray hospital are considering conducting further studies to test this reference as a target value in their GDM treatment and monitoring.
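
The correlation analysis above (performed in Stata 13 in the study) amounts to computing Pearson's r on paired measurements; a minimal Python sketch follows, where the arrays are synthetic stand-ins, not the study data.

```python
# Pearson correlation between fructosamine and 2-h postprandial glucose.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Placeholder paired measurements; in the study these were bi-weekly
# serum fructosamine (umol/dL) and 2-hour postprandial glucose values.
fructosamine = rng.normal(112.9, 20.7, 500)
g2 = 0.02 * fructosamine + rng.normal(7.0, 1.0, 500)

r, p = pearsonr(fructosamine, g2)
print(f"r = {r:.4f}, p = {p:.3g}")  # the paper reports r = 0.4379, p < 0.001
```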

Keywords: gestational diabetes mellitus, monitoring tool, serum fructosamine, Vietnam

Procedia PDF Downloads 282
266 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the predictions of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected over the entire operating region of the engine and from a predictive combustion model, developed in Gamma Technologies (GT)-Power using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases is tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Advantages such as high accuracy and robustness across operating conditions, low computational time and the smaller number of data points required for calibration establish a platform on which the model-based approach can be used in the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
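
As a minimal sketch of the semi-empirical approach described above, an ensemble regressor can map physics-based combustion parameters to measured NOx. The feature set and file names are assumptions, and scikit-learn stands in for whichever statistical toolchain the authors used.

```python
# Ensemble ML mapping in-cylinder combustion parameters to measured NOx.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical columns: burned-zone temperature [K], unburned-zone
# temperature [K], burned-zone O2 concentration [-], trapped fuel
# mass [mg], EGR rate [%].
X = np.loadtxt("steady_state_features.csv", delimiter=",", skiprows=1)
y = np.loadtxt("measured_nox.csv", delimiter=",", skiprows=1)

model = GradientBoostingRegressor(n_estimators=500, max_depth=4,
                                  learning_rate=0.05)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
model.fit(X, y)  # the trained model predicts NOx at unseen operating points
```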

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 116
265 Design of a Low-Cost, Portable, Sensor Device for Longitudinal, At-Home Analysis of Gait and Balance

Authors: Claudia Norambuena, Myissa Weiss, Maria Ruiz Maya, Matthew Straley, Elijah Hammond, Benjamin Chesebrough, David Grow

Abstract:

The purpose of this project is to develop a low-cost, portable sensor device that can be used at home for long-term analysis of gait and balance abnormalities. One area of particular concern involves the asymmetries in movement and balance that can accompany certain types of injuries and/or the associated devices used in the repair and rehabilitation process (e.g. the use of splints and casts), which can often increase the chances of falls and additional injuries. This device has the capacity to monitor a patient during the rehabilitation process after injury or operation, increasing the patient's access to healthcare while decreasing the number of visits to the patient's clinician. The sensor device may thereby improve the quality of the patient's care, particularly in rural areas where access to the clinician could be limited, while simultaneously decreasing the overall cost associated with the patient's care. The device consists of nine interconnected accelerometer/gyroscope/compass chips (9-DOF IMU, Adafruit, New York, NY). The sensors attach to and are used to determine the orientation and acceleration of the patient's lower abdomen, C7 vertebra (lower neck), L1 vertebra (middle back), the anterior side of each thigh and tibia, and the dorsal side of each foot. In addition, pressure sensors are embedded in shoe inserts, with one sensor (ESS301, Tekscan, Boston, MA) beneath the heel and three sensors (Interlink 402, Interlink Electronics, Westlake Village, CA) beneath the metatarsal bones of each foot. These sensors measure the distribution of the weight applied to each foot as well as stride duration. A small microcontroller (Arduino Mega, Arduino, Ivrea, Italy) is used to collect data from these sensors into a CSV file. MATLAB is then used to analyze the data and output the hip, knee, ankle, and trunk angles projected on the sagittal plane. The open-source program Processing is then used to generate an animation of the patient's gait. The accuracy of the sensors was validated through comparison to goniometric measurements (±2° error). The sensor device was also shown to have sufficient sensitivity to observe various gait abnormalities. Several patients used the sensor device, and the data collected from each represented the patient's movements. Further, the sensors were found to be able to observe gait abnormalities caused by the addition of a small amount of weight (4.5 - 9.1 kg) to one side of the patient. The user-friendly interface and portability of the sensor device will help to construct a bridge between patients and their clinicians with fewer necessary inpatient visits.
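
A brief sketch of the post-processing step described above (done in MATLAB in the project), rendered here in Python: a sagittal-plane joint angle can be taken as the difference between adjacent segment pitch angles logged by the IMUs. The CSV column names are assumptions about the Arduino log layout.

```python
# Compute sagittal knee angles from an IMU log, one sample per CSV row.
import csv

def sagittal_knee_angles(csv_path: str) -> list[float]:
    """Knee flexion ~= thigh pitch minus tibia pitch, per sample."""
    angles = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            thigh = float(row["thigh_pitch_deg"])  # hypothetical column
            tibia = float(row["tibia_pitch_deg"])  # hypothetical column
            angles.append(thigh - tibia)
    return angles

# Example usage: angles = sagittal_knee_angles("gait_log.csv")
```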

Keywords: biomedical sensing, gait analysis, outpatient, rehabilitation

Procedia PDF Downloads 291
264 Molecular Migration in Polyvinyl Acetate Matrix: Impact of Compatibility, Number of Migrants and Stress on Surface and Internal Microstructure

Authors: O. Squillace, R. L. Thompson

Abstract:

Migration of small molecules to, and across, the surface of polymer matrices is a little-studied problem with important industrial applications. Tackifiers in adhesives, flavors in foods and binding agents in paints all present situations where the function of a product depends on the ability of small molecules to migrate through a polymer matrix to achieve desired properties such as softness and dispersion of fillers, and to deliver an effect that is felt (or tasted) on a surface. It has been shown that the chemical and molecular structure, surface free energies, phase behavior, close environment and compatibility of the system influence the migrants' motion. When differences in behavior are observed, such as whether or not segregation to the surface occurs, it is of crucial importance to identify and better understand the driving forces involved in the process of molecular migration. To this end, experiment is allied with theory in order to deliver a validated theoretical and computational toolkit to describe and predict these phenomena. The systems chosen for this study address the effect of polarity mismatch between the migrants and the polymer matrix, and that of a second migrant on the first one. Polyvinyl acetate is used as the non-polar resin polymer matrix to which more or less polar migrants (sorbitol, carvone, octanoic acid (OA), triacetin) are added. Contact angle measurements show a surface excess for sorbitol (polar) mixed with PVAc, as the surface energy is lowered compared to that of pure PVAc. This effect is increased upon the addition of carvone or triacetin (non-polar). Surface micro-structures are also evidenced by atomic force microscopy (AFM). Ion beam analysis (Nuclear Reaction Analysis), supplemented by neutron reflectometry, can accurately characterize the self-organization of surfactants, oligomers, and aromatic molecules in polymer films in order to relate the macroscopic behavior to the length scales that are amenable to simulation. The nuclear reaction analysis (NRA) data for 20% deuterated OA show evidence of a surface excess, which is enhanced after annealing. The addition of 10% triacetin, as a second migrant, results in the formation of an underlying layer enriched in triacetin below the surface excess of OA. The results show that molecules in polarity mismatch with the matrix tend to segregate to the surface, and that this is favored by the addition of a second migrant of the same polarity as the matrix. Since studies have so far been restricted to model supported films under static conditions, we also wish to address the more challenging conditions of materials under controlled stress or strain. To achieve this, a simple rig and PDMS cell have been designed to stretch the material to a defined strain and to probe these mechanical effects by ion beam analysis and atomic force microscopy. This will be a significant step towards exploring the influence of extensional strain on surface segregation and flavor release in cross-linked rubbers.

Keywords: polymers, surface segregation, thin films, molecular migration

Procedia PDF Downloads 137
263 The Invaluable Contributions of Radiography and Radiotherapy in Modern Medicine

Authors: Sahar Heidary

Abstract:

Radiography and radiotherapy have emerged as crucial pillars of modern medical practice, revolutionizing diagnostics and treatment for a myriad of health conditions. This abstract highlights the pivotal role of radiography and radiotherapy in healthcare and society. Radiography, a non-invasive imaging technique, has significantly advanced medical diagnostics by enabling the visualization of internal structures and abnormalities within the human body. With the advent of digital radiography, clinicians can obtain high-resolution images promptly, leading to faster diagnoses and informed treatment decisions. Radiography plays a pivotal role in detecting fractures, tumors, infections, and various other conditions, allowing for timely interventions and improved patient outcomes. Moreover, its widespread accessibility and cost-effectiveness make it an indispensable tool in healthcare settings worldwide. Radiotherapy, a branch of medical science that utilizes high-energy radiation, has in turn become an integral component of cancer treatment and management. By precisely targeting and damaging cancerous cells, radiotherapy offers a potent strategy to control tumor growth and, in many cases, leads to cancer eradication. Additionally, radiotherapy is often used in combination with surgery and chemotherapy, providing a multifaceted approach to combating cancer comprehensively. Continuous advancements in radiotherapy techniques, such as intensity-modulated radiotherapy and stereotactic radiosurgery, have further improved treatment precision while minimizing damage to surrounding healthy tissues. Furthermore, radiography and radiotherapy have demonstrated their worth beyond oncology. Radiography is instrumental in guiding various medical procedures, including catheter placement, joint injections, and dental evaluations, reducing complications and enhancing procedural accuracy. Radiotherapy, for its part, finds applications in non-cancerous conditions such as benign tumors, vascular malformations, and certain neurological disorders, offering therapeutic options for patients who may not benefit from traditional surgical interventions. In conclusion, radiography and radiotherapy stand as indispensable tools in modern medicine, driving transformative improvements in patient care and treatment outcomes. Their ability to diagnose, treat, and manage a wide array of medical conditions underscores their value in medical practice. As technology continues to advance, radiography and radiotherapy will undoubtedly play an ever more significant role in shaping the future of healthcare, ultimately saving lives and enhancing the quality of life for countless individuals worldwide.

Keywords: radiology, radiotherapy, medical imaging, cancer treatment

Procedia PDF Downloads 72
262 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center

Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael

Abstract:

Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
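
To make the KPI discussion concrete, here is an illustrative calculation using power usage effectiveness (PUE), a standard data-center efficiency metric used as a stand-in for the "traditional" KPIs the abstract mentions (PUE itself is not named in the abstract); the energy figures and the AI-attributed saving are hypothetical inputs, not results.

```python
# PUE = total facility energy / IT equipment energy (ideal value = 1.0).
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

baseline = pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000)
after_ai = pue(total_facility_kwh=1_380_000, it_equipment_kwh=1_000_000)
print(f"PUE {baseline:.2f} -> {after_ai:.2f}; "
      f"hypothetical AI-driven cooling improvement: {baseline - after_ai:.2f}")
```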

Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency

Procedia PDF Downloads 38
261 A Diagnostic Accuracy Study: Comparison of Two Different Molecular-Based Tests (Genotype HelicoDR and Seeplex Clar-H. pylori ACE Detection), in the Diagnosis of Helicobacter pylori Infections

Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor

Abstract:

Aim: The aim of this study was to compare the diagnostic values of two different molecular-based tests (GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection) in detecting the presence of H. pylori in gastric biopsy specimens. In addition, the study aimed to determine the resistance ratios of H. pylori strains isolated from gastric biopsy cultures against clarithromycin and quinolones, using both genotypic (GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection) and phenotypic (gradient strip, E-test) methods. Material and methods: A total of 266 patients who were admitted to the Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and June 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in all biopsy samples was investigated by five different diagnostic methods: culture (C) (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France), histology (H) (Giemsa, Hematoxylin and Eosin staining), rapid urease test (RUT) (CLOtest, Cimberly-Clark, USA), and two different molecular tests: GenoType® HelicoDR, Hain, Germany, based on a DNA strip assay, and Seeplex® H. pylori-ClaR-ACE Detection, Seegene, South Korea, based on multiplex PCR. Antimicrobial resistance of H. pylori isolates against clarithromycin and levofloxacin was determined by the GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection, and gradient strip (E-test, bioMerieux, France) methods. Culture positivity alone, or positivity of both histology and RUT together, was accepted as the gold standard for H. pylori positivity. Sensitivity and specificity rates of the two molecular methods used in the study were calculated against the two gold standards mentioned above. Results: A total of 266 patients aged 16-83 years were included, of whom 144 (54.1%) were female and 122 (45.9%) male. 144 patients were culture positive, and 157 were positive by both H and RUT. 179 patients were positive with both GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection. Sensitivity and specificity rates of the five studied methods were as follows: C, 80.9% and 84.4%; H + RUT, 88.2% and 75.4%; GenoType® HelicoDR, 100% and 71.3%; and Seeplex® H. pylori-ClaR-ACE Detection, 100% and 71.3%. A strong correlation was found between C and H+RUT, between C and GenoType® HelicoDR, and between C and Seeplex® H. pylori-ClaR-ACE Detection (r = 0.644, p = 0.000; r = 0.757, p = 0.000; r = 0.757, p = 0.000, respectively). Of the 144 isolated H. pylori strains, 24 (16.6%) were resistant to clarithromycin and 18 (12.5%) to levofloxacin. Genotypic clarithromycin resistance was detected in only 15 cases with GenoType® HelicoDR and in 6 cases with Seeplex® H. pylori-ClaR-ACE Detection. Conclusion: In our study, it was concluded that GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection were the most sensitive of all the investigated diagnostic methods (C, H, and RUT).
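
For reference, the sensitivity and specificity figures above follow from a standard 2x2 table against the gold standard. The small Python sketch below reconstructs the GenoType® HelicoDR figures from the counts reported in the abstract (144 culture-positive, 179 test-positive, 266 total); the cell counts are derived, not taken from a published table.

```python
# Diagnostic accuracy arithmetic against a gold standard.
# Reconstructed cells: tp = 144 (all culture-positives detected, fn = 0);
# fp = 179 - 144 = 35; tn = 266 - 179 = 87.
def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int):
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=144, fp=35, fn=0, tn=87)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 100.0%, 71.3%
```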

Keywords: Helicobacter pylori, GenoType® HelicoDR, Seeplex ® H. pylori -ClaR- ACE Detection, antimicrobial resistance

Procedia PDF Downloads 170
260 Luminescent Properties of Plastic Scintillator with Large Area Photonic Crystal Prepared by a Combination of Nanoimprint Lithography and Atomic Layer Deposition

Authors: Jinlu Ruan, Liang Chen, Bo Liu, Xiaoping Ouyang, Zhichao Zhu, Zhongbing Zhang, Shiyi He, Mengxuan Xu

Abstract:

Plastic scintillators play an important role in the measurement of mixed neutron/gamma pulsed radiation, in neutron radiography and in pulse shape discrimination technology. In these applications, it is desirable that as many as possible of the photons produced by the interactions between the plastic scintillator and the radiation be detected by the photoelectric detectors, and that more photons be emitted from the scintillator along the specific direction where the detectors are located. Unfortunately, a majority of the photons produced are trapped in the plastic scintillator due to total internal reflection (TIR): there is a significant light-trapping effect when the incident angle of the internal scintillation light is larger than the critical angle. Some of the photons trapped in the scintillator may be absorbed by the scintillator itself, and the others are emitted from the edges of the scintillator. This makes the light extraction of plastic scintillators very low. Moreover, only a small portion of the photons emitted from the scintillator can be detected effectively by the detectors, because the distribution of their emission directions exhibits an approximately Lambertian angular profile following a cosine emission law. Therefore, enhancing the light extraction efficiency and adjusting the emission angular profile are the keys to improving the number of photons detected by the detectors. In recent years, photonic crystal structures have successfully been applied to inorganic scintillators to enhance the light extraction efficiency and adjust the angular profile of the scintillation light. However, because the preparation methods of photonic crystals can deteriorate the performance of plastic scintillators or even destroy them, the investigation of preparation methods of photonic crystals for plastic scintillators, and of the luminescent properties of plastic scintillators with photonic crystal structures, remains inadequate. Although we have previously fabricated photonic crystal structures on the surface of plastic scintillators by a modified self-assembly technique and achieved a large enhancement of the light extraction efficiency without evident angular dependence of the scintillation light profile, the preparation of large-area photonic crystals (diameter larger than 6 cm) with a perfect periodic structure is still difficult. In this paper, large-area photonic crystals were first prepared on the scintillator surface by nanoimprint lithography; a conformal layer of high-refractive-index material was then deposited on the photonic crystal by atomic layer deposition, in order to enhance the stability of the photonic crystal structures and to increase the number of leaky modes, improving the light extraction efficiency. The luminescent properties of the plastic scintillator with photonic crystals prepared by this method are compared with those of a plastic scintillator without a photonic crystal. The results indicate that the number of photons detected by the detectors is increased by the enhanced light extraction efficiency, and that the angular profile of the scintillation light exhibits evident angular dependence for the scintillator with photonic crystals. This preparation of photonic crystals is beneficial to scintillation detection applications and lays an important technical foundation for plastic scintillators to meet special requirements under different application backgrounds.
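
For orientation, the light-trapping described above follows from Snell's law; the standard textbook relations (not taken from the paper) for the TIR critical angle and the fraction of isotropically emitted photons inside the escape cone of one face, assuming a scintillator of refractive index n in air, are:

```latex
\theta_c = \arcsin\!\left(\frac{1}{n}\right), \qquad
f_{\mathrm{esc}} = \frac{1}{2}\left(1-\cos\theta_c\right)
                 = \frac{1}{2}\left(1-\sqrt{1-\frac{1}{n^{2}}}\right).
```

For a typical plastic scintillator with n ≈ 1.58, this gives θc ≈ 39° and f_esc ≈ 11% per face, which is why adding leaky modes via a photonic crystal can substantially raise the extracted fraction.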

Keywords: angular profile, atomic layer deposition, light extraction efficiency, plastic scintillator, photonic crystal

Procedia PDF Downloads 203
259 Predicting Mortality among Acute Burn Patients Using BOBI Score vs. FLAMES Score

Authors: S. Moustafa El Shanawany, I. Labib Salem, F. Mohamed Magdy Badr El Dine, H. Tag El Deen Abd Allah

Abstract:

Thermal injuries remain a global health problem and a common issue encountered in forensic pathology. They are a devastating cause of morbidity and mortality in children and adults, especially in developing countries, causing permanent disfigurement, scarring and grievous hurt. Burns have always been a matter of legal concern in cases of suicidal burns, self-inflicted burns for false accusation, and homicidal attempts. Assessment of burn injuries, as well as rating permanent disabilities and disfigurement following thermal injuries for the benefit of compensation claims, represents a challenging problem. This necessitates the development of reliable scoring systems to yield the expected likelihood of permanent disability or a fatal outcome following burn injuries. The study was designed to identify the risk factors for mortality in acute burn patients and to evaluate the applicability of the FLAMES (Fatality by Longevity, APACHE II score, Measured Extent of burn, and Sex) and BOBI (Belgian Outcome in Burn Injury) model scores in predicting the outcome. The study was conducted on 100 adult patients with acute burn injuries admitted to the Burn Unit of Alexandria Main University Hospital, Egypt from October 2014 to October 2015. Victims were examined after obtaining informed consent, and the data were collected in specially designed sheets including demographic data, burn details and any associated inhalation injury. Each burn patient was assessed using both the BOBI and FLAMES scoring systems. The results of the study show that the mean age of patients was 35.54±12.32 years. Males outnumbered females (55% and 45%, respectively). Most patients were accidentally burnt (95%), whereas suicidal burns accounted for the remaining 5%. Flame burn was recorded in 82% of cases. Furthermore, 8% of patients sustained burns over more than 60% of the total body surface area (TBSA), 19% of patients needed mechanical ventilation, and 19% of burnt patients died, either from wound sepsis, multi-organ failure or pulmonary embolism. The mean length of hospital stay was 24.91±25.08 days. The mean BOBI score was 1.07±1.27 and that of the FLAMES score was -4.76±2.92. The FLAMES score demonstrated an area under the receiver operating characteristic (ROC) curve of 0.95, which was significantly higher than that of the BOBI score (0.883). A statistically significant association was revealed between both predictive models and the outcome. The study concluded that both scoring systems were beneficial in predicting mortality in acutely burnt patients. However, the FLAMES score could be applied with a higher level of accuracy.
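
The AUC comparison above can be reproduced mechanically; a minimal scikit-learn sketch follows, where the score arrays are synthetic placeholders for the per-patient BOBI and FLAMES scores and mortality labels, not the study data.

```python
# Compare two prognostic scores by ROC AUC against the observed outcome.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
died = rng.integers(0, 2, 100)                    # 0 = survived, 1 = died
bobi = died * 1.5 + rng.normal(1.07, 1.27, 100)   # placeholder scores
flames = died * 3.0 + rng.normal(-4.76, 2.92, 100)

print("BOBI AUC:  ", roc_auc_score(died, bobi))
print("FLAMES AUC:", roc_auc_score(died, flames))  # paper: 0.883 vs. 0.95
```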

Keywords: BOBI, burns, FLAMES, scoring systems, outcome

Procedia PDF Downloads 340
258 Geoinformation Technology of Agricultural Monitoring Using Multi-Temporal Satellite Imagery

Authors: Olena Kavats, Dmitry Khramov, Kateryna Sergieieva, Vladimir Vasyliev, Iurii Kavats

Abstract:

Geoinformation technologies for space-based agromonitoring support operational decision making in managing the agricultural sector of the economy. Existing technologies use satellite images in the optical range of the electromagnetic spectrum, but time series of optical images often contain gaps due to the presence of clouds and haze. A geoinformation technology has been created that fills gaps in time series of optical images (Sentinel-2, Landsat-8, PROBA-V, MODIS) with radar survey data (Sentinel-1) and uses information about the agrometeorological conditions of the growing season for individual monitoring years. The technology performs crop classification and mapping for the spring-summer (winter and spring crops) and autumn-winter (winter crops) periods of vegetation, monitors the dynamics of seasonal changes in crop state, and forecasts crop yield. Crop classification is based on supervised classification algorithms and takes into account the peculiarities of crop growth at different vegetation stages (dates of sowing, emergence, active vegetation, and harvesting) and agricultural land state characteristics (row spacing, seedling density, etc.). A catalog of samples of the main agricultural crops (Ukraine) has been created, and crop spectral signatures are calculated with the preliminary removal of row spacing, cloud cover, and cloud shadows in order to construct time series of crop growth characteristics. The obtained data are used in tracking grain crop growth and in the timely detection of deviations of growth trends from reference samples of a given crop for a selected date. Statistical models of crop yield forecasting are created as linear and nonlinear relations between crop yield indicators and crop state characteristics (temperature, precipitation, vegetation indices, etc.). Predicted values of grain crop yield are estimated with an accuracy of up to 95%. The developed technology was used for monitoring agricultural areas in a number of regions of Great Britain and Ukraine using the EOS Crop Monitoring Platform (https://crop-monitoring.eos.com). The obtained results allow the conclusion that the joint use of Sentinel-1 and Sentinel-2 images improves the separation of winter crops (rapeseed, wheat, barley) in the early stages of vegetation (October-December) and allows successful separation of soybean, corn, and sunflower sowing areas, which are quite similar in their spectral characteristics.
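
A minimal sketch of the gap-filling idea described above: a regressor fitted on cloud-free dates maps Sentinel-1 backscatter to an optical vegetation index and then fills cloudy dates. This is an illustrative reading of the approach, not the authors' exact algorithm, and the values are placeholders.

```python
# Fill cloudy NDVI dates from SAR backscatter via a regression model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Per-date features for one field: [VV backscatter dB, VH backscatter dB].
sar = np.array([[-11.2, -17.8], [-10.6, -16.9], [-9.8, -15.7],
                [-9.1, -15.0], [-8.7, -14.6]])
ndvi = np.array([0.31, 0.42, 0.58, np.nan, 0.74])  # NaN = cloudy date

clear = ~np.isnan(ndvi)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(sar[clear], ndvi[clear])
ndvi[~clear] = model.predict(sar[~clear])  # gap-filled optical time series
print(ndvi)
```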

Keywords: geoinformation technology, crop classification, crop yield prediction, agricultural monitoring, EOS Crop Monitoring Platform

Procedia PDF Downloads 460
257 The Effect of Technology on Skin Development and Progress

Authors: Haidy Weliam Megaly Gouda

Abstract:

Dermatology is often a neglected specialty in low-resource settings, despite the high morbidity associated with skin disease. This becomes even more significant when associated with HIV infection, as dermatological conditions are more common and aggressive in HIV-positive patients. African countries have the highest HIV infection rates, and skin conditions are frequently misdiagnosed and mismanaged because of a lack of dermatological training and educational material. The frequent lack of diagnostic tests in the African setting renders basic clinical skills all the more vital. This project aimed to improve the diagnosis and treatment of skin disease in the HIV population in a district hospital in Malawi. A basic dermatological clinical tool was developed and produced in collaboration with local staff, based on the available literature and on data collected from clinics. The aim was to improve diagnostic accuracy and provide guidance for the treatment of skin disease in HIV-positive patients. A literature search within Embase, Medline and Google Scholar was performed and supplemented with data obtained from attending five antiretroviral clinics. From the literature, conditions were selected for inclusion in the resource if they were described as specific to, more prevalent in, or more extensive in the HIV population, or as having more adverse outcomes if they develop in HIV patients. Resource-appropriate treatment options were decided using Malawian Ministry of Health guidelines and textbooks specific to African dermatology. After the collection of data and discussion with local clinical and pharmacy staff, a list of 15 skin conditions was included, and a booklet was created using a simple layout of a picture, a diagnostic description of the disease, and treatment options. Clinical photographs were collected from local clinics (with the full consent of the patient) or from the book ‘Common Skin Diseases in Africa’ (permission granted if fully acknowledged and used in a not-for-profit capacity). This tool was evaluated by the local staff alongside an educational teaching session on skin disease. This project aimed to reduce uncertainty in diagnosis and provide guidance for appropriate treatment in HIV patients by gathering information into one practical and manageable resource. To further this project, we hope to review the effectiveness of the tool in practice.

Keywords: prevalence and pattern of skin diseases, impact on quality of life, interventions, clinical signs

Procedia PDF Downloads 68
256 Epidemiological Patterns of Pediatric Fever of Unknown Origin

Authors: Arup Dutta, Badrul Alam, Sayed M. Wazed, Taslima Newaz, Srobonti Dutta

Abstract:

Background: In today's world, with modern science and contemporary technology, many diseases can be quickly identified and ruled out, but fever of unknown origin (FUO) in children still presents diagnostic difficulties in clinical settings. Any fever that reaches 38 °C and lasts for more than seven days without a known cause is now classified as a fever of unknown origin (FUO). Despite tremendous progress in the medical sector, FUO persists as a major health issue and a major contributor to morbidity and mortality, particularly in children, and its spectrum is sometimes unpredictable. The etiology is influenced by geographic location, age, socioeconomic level, the frequency of antibiotic resistance, and genetic vulnerability. Since there are currently no established diagnostic algorithms, doctors must evaluate each patient individually with extreme caution. A persistent fever poses difficulties for both the patient and the doctor. This prospective observational study was carried out in a Bangladeshi tertiary care hospital from June 2018 to May 2019 with the goal of identifying the epidemiological patterns of fever of unknown origin in pediatric patients. Methods: It was a hospital-based prospective observational study carried out on 106 children (between 2 months and 12 years) with prolonged fever of >38.0 °C lasting for more than 7 days without a clear source. Children with additional chronic diseases or known immunodeficiency disorders were excluded. Clinical practices that helped determine the definitive etiology were assessed. Initial testing included a complete blood count, routine urine examination, PBF, chest X-ray, CRP measurement, blood cultures, serology, and additional pertinent investigations. The analysis focused mostly on the etiological results. SPSS 21 was used to analyze all of the study data. Findings: A total of 106 patients identified as having FUO were assessed, with over half (57.5%) being female and the majority (40.6%) falling within the 1 to 3-year age range. The study categorized the etiological outcomes into five groups: infections, malignancies, connective tissue conditions, miscellaneous, and undiagnosed. Infections were found to be the main cause, at 44.3% of cases; undiagnosed cases accounted for 31.1%, malignancies for 10.4%, miscellaneous causes for 8.5%, and connective tissue disorders for 4.7%. Hepato-splenomegaly was seen in patients with enteric fever, malaria, acute lymphoid leukemia, lymphoma, and hepatic abscesses, either alone or in combination with other conditions. About 53% of undiagnosed patients also had hepato-splenomegaly. Conclusion: Infections are the primary cause of PUO (pyrexia of unknown origin) in children, with undiagnosed cases being the second most common category. An incremental approach is beneficial in the diagnostic process: non-invasive examinations are used to diagnose infections and connective tissue disorders, while invasive investigations are used to diagnose cancer and other ailments. According to this study, the prevalence of undiagnosed disease is still considerable, so thorough history-taking and physical examination are necessary to provide a precise diagnosis.

Keywords: children, diagnostic challenges, fever of unknown origin, pediatric fever, undiagnosed diseases

Procedia PDF Downloads 33
255 An Approach to Determine the in Transit Vibration to Fresh Produce Using Long Range Radio (LORA) Wireless Transducers

Authors: Indika Fernando, Jiangang Fei, Roger Stanely, Hossein Enshaei

Abstract:

Ever-increasing consumer demand for quality fresh produce has multiplied the pressure on post-harvest supply chains in recent years. Mechanical injury to fresh produce is a critical factor in produce wastage, especially as supply chains physically extend over thousands of miles. The impact of vibration damage in transit was identified as a specific area of focus, as it results in the wastage of a significant portion of fresh produce, at times ranging from 10% to 40% in some countries. Several studies have concentrated on quantifying the impact of vibration on fresh produce, but collecting vibration impact data continuously has been a challenge due to limitations in the battery life or memory capacity of the devices. Study samples were therefore limited to a stretch of the transit passage or a limited time of the journey. This may or may not give an accurate understanding of the vibration impacts encountered throughout the transit passage, which limits the accuracy of the results. Consequently, an approach that can extend the capacity and ability to determine vibration signals over the transit passage would contribute to accurately analyzing vibration damage along the post-harvest supply chain. A mechanism was developed to address this challenge, capable of measuring the in-transit vibration continuously throughout the transit passage, subject to a minimum acceleration threshold (0.1 g). The system consists of six tri-axial vibration transducers installed at different locations inside the cargo (produce) pallets in the truck, which transmit vibration signals through LORA (Long Range Radio) technology to a central device installed inside the container. The central device processes and records the vibration signals transmitted by the portable transducers, along with the GPS location. This method conserves the power consumption of the portable transducers, maximizing the capability to measure vibration impacts over transit passages extending to days in the distribution process. Trial tests conducted using the approach reveal that it is a reliable method to measure and quantify in-transit vibrations along the supply chain. The GPS capability enables the identification of the locations in the supply chain where significant vibration impacts were encountered. This method contributes to determining the causes, susceptibility and intensity of vibration impact damage to fresh produce in the post-harvest supply chain. More broadly, the approach could be used to determine vibration impacts not only for fresh produce but for any products in supply chains that may spend from a few hours to several days in transit.
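
As a sketch of the transducer-side logic implied above (sample continuously, record and transmit only above the 0.1 g threshold to conserve power), the following Python outline uses hypothetical stand-ins for the sensor driver and the LORA uplink; it is an illustration, not the authors' firmware.

```python
# Threshold-triggered vibration logging for a pallet-mounted transducer.
import math
import time

THRESHOLD_G = 0.1  # minimum acceleration of interest, from the paper

def read_accel_g() -> tuple[float, float, float]:
    """Placeholder for the tri-axial accelerometer driver (g units)."""
    return (0.02, -0.01, 0.15)

def lora_send(payload: dict) -> None:
    """Placeholder for the LORA uplink to the central device."""
    print("tx:", payload)

while True:
    ax, ay, az = read_accel_g()
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)
    if magnitude >= THRESHOLD_G:
        lora_send({"t": time.time(), "ax": ax, "ay": ay, "az": az})
    time.sleep(0.01)  # 100 Hz sampling loop (rate is an assumption)
```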

Keywords: post-harvest, supply chain, wireless transducers, LORA, fresh produce

Procedia PDF Downloads 270
254 A Clinical Cutoff to Identify Metabolically Unhealthy Obese and Normal-Weight Phenotype in Young Adults

Authors: Lívia Pinheiro Carvalho, Luciana Di Thommazo-Luporini, Rafael Luís Luporini, José Carlos Bonjorno Junior, Renata Pedrolongo Basso Vanelli, Manoel Carneiro de Oliveira Junior, Rodolfo de Paula Vieira, Renata Trimer, Renata G. Mendes, Mylène Aubertin-Leheudre, Audrey Borghi-Silva

Abstract:

Rationale: Cardiorespiratory fitness (CRF) and functional capacity in young obese and normal-weight people are associated with metabolic and cardiovascular diseases and mortality. However, it remains unclear whether the metabolically healthy (MH) or at-risk (AR) phenotype influences cardiorespiratory fitness, not only in vulnerable populations such as obese adults but also in normal-weight people. The HOMA insulin resistance index (HI) and the leptin-adiponectin ratio (LA) are strong markers for characterizing those phenotypes, which we hypothesized to be associated with physical fitness. We also hypothesized that an easy and feasible exercise test could identify a subpopulation at risk of developing metabolic and related disorders. Methods: Thirty-nine sedentary men and women (20-45y; 18.530 kg.m-2) underwent a clinical evaluation, including the six-minute step test (ST), a well-validated and reliable test for young people. Body composition was assessed by tetrapolar bioimpedance in a fasting state and, for women, in the follicular phase. Maximal cardiopulmonary exercise testing, as well as the ST, evaluated the oxygen uptake at the peak of the test (VO2peak) with an Oxycon Mobile ergospirometer. Lipids, glucose and insulin were analysed, and serum leptin and adiponectin were quantified from blood samples by the ELISA method. Volunteers were divided into two groups, AR or MH, according to an HI cutoff of 1.95, which was previously determined in the literature. A t-test for comparisons between groups, Pearson's test to correlate the main variables, and ROC analysis of the number of up-and-down cycles in the ST (SC) for discriminating AR were applied (p<0.05). Results: Higher LA and fat mass (FM) and lower HDL, SC, leg lean mass (LM) and VO2peak were found in AR than in MH. Significant correlations were found between VO2peak and SC (r = 0.80), as well as between LA and FM (r = 0.87), VO2peak (r = -0.73), and SC (r = -0.65). The area under the curve showed moderate accuracy (0.75) for SC < 173 in discriminating the AR phenotype. Conclusion: Our study found that at-risk obese and normal-weight subjects showed an unhealthy metabolism as well as poor CRF and functional daily activity capacity. Additionally, a simple and less costly functional test associated with the above-mentioned aspects is able to identify ‘at risk’ subjects for primary intervention, with important clinical and health implications.
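
A cutoff such as "SC < 173" is typically read off the ROC curve at the point maximizing Youden's J (sensitivity + specificity - 1); a small sketch follows, using synthetic placeholder data rather than the study's measurements.

```python
# Derive an optimal clinical cutoff from ROC analysis via Youden's J.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
at_risk = rng.integers(0, 2, 39)  # phenotype labels (1 = AR), placeholders
# Lower step-count (SC) indicates risk in the study, so AR gets lower SC.
sc = np.where(at_risk, rng.normal(160, 15, 39), rng.normal(185, 15, 39))

# roc_curve expects higher scores for the positive class, so negate SC.
fpr, tpr, thresholds = roc_curve(at_risk, -sc)
best = np.argmax(tpr - fpr)  # index maximizing Youden's J
print("optimal cutoff: SC <", -thresholds[best])
```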

Keywords: aerobic capacity, exercise, fitness, metabolism, obesity, 6MST

Procedia PDF Downloads 361
253 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as is feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
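
In the notation of the abstract, the minimization can be sketched as follows; the quadratic norm and the regularization order p are assumptions consistent with the description above, not the paper's stated choices:

```latex
J(u_0) = \frac{1}{2}\int_\Omega \bigl|\,u(\mathbf{x},1;u_0) - v_1(\mathbf{x})\,\bigr|^2 \,\mathrm{d}\mathbf{x}
       + \lambda \int_\Omega \bigl|\,\Delta^{p} u_0(\mathbf{x})\,\bigr|^2 \,\mathrm{d}\mathbf{x},
\qquad
u_0^{(k+1)} = u_0^{(k)} - \alpha\,\frac{\delta J}{\delta u_0}\bigg|_{u_0^{(k)}},
```

where the functional derivative δJ/δu0 is evaluated by one forward NSE integration followed by one backward integration of the adjoint NSE from t = 1 to t = 0, and λ controls the high-order Laplacian term that dampens the gradients of u0.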

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 228
252 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India

Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar

Abstract:

The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Since CT doses are the highest among diagnostic radiology practices, it is of great significance to be aware of the patient's radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate the CTDIvol values for a great number of patients during the most frequent CT examinations, to compare CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination at different centres and scanner models. The output CT dose index measurements were carried out on single- and multislice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combinations used. A total of 100 CT scanners were involved in this study. Data on 15,000 examinations of patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals. Of the 15,000 examinations, 5,000 were head, 5,000 were chest, and 5,000 were abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and -5.7 for the head, chest, and abdomen protocols mentioned above, respectively. This investigation also revealed an observable change in CT practices, with a much wider range of studies being performed currently in South India. This reflects the improved capacity of CT scanners to scan longer lengths and at finer resolutions, as permitted by helical and multislice technology. Also, some CT scanners used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality. This increases the patient radiation dose as well as the measured CTDIvol, so it is suggested that such CT scanners select appropriate slice thicknesses and scanning parameters in order to reduce the patient dose. If these routine scan parameters for head, chest, and abdomen procedures are optimized, the dose indices would be optimal, leading to lower CT doses. In the South Indian region, all the CT machines were routinely tested for QA once a year, as per AERB requirements.
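
The divergence statistic reported above can be computed as follows; the numbers below are hypothetical stand-ins for the study's per-protocol data, and since the abstract does not state the unit of the reported divergence, a signed percentage deviation is assumed:

    import numpy as np

    # Hypothetical console and phantom values (mGy), one per protocol.
    displayed = np.array([45.1, 10.8, 12.6])   # console CTDIvol: head, chest, abdomen
    measured  = np.array([42.9, 10.0, 13.3])   # ion-chamber phantom CTDIvol

    # Signed percentage divergence of the displayed value from the measured one.
    divergence = 100.0 * (displayed - measured) / measured
    print(dict(zip(["head", "chest", "abdomen"], divergence.round(1))))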

Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose

Procedia PDF Downloads 262
251 Hypothalamic Para-Ventricular and Supra-Optic Nucleus Histo-Morphological Alterations in the Streptozotocin-Diabetic Gerbils (Gerbillus Gerbillus)

Authors: Soumia Hammadi, Imane Nouacer, Lamine Hamida, Younes A. Hammadi, Rachid Chaibi

Abstract:

Aims and objective: In the present work, we investigate the impact of both acute and chronic diabetes mellitus induced by streptozotocin (STZ) on the hypothalamus of the small gerbil (Gerbillus gerbillus). For this purpose, we aimed to study the histological structure of the gerbil's hypothalamic supraoptic (NSO) and paraventricular (NPV) nuclei at two distinct time points: two days and 30 days after diabetes onset. Methods: We conducted our investigation using 19 adult male gerbils weighing 25 to 28 g, divided into groups as follows. Group I: control gerbils (n=6) received an intraperitoneal injection of citrate buffer. Group II: STZ-diabetic gerbils (n=8) received a single intraperitoneal injection of STZ at a dose of 165 mg/kg of body weight. Diabetes onset (D0) was taken as the first hyperglycemia reading exceeding 2.5 g/L. This group was further divided into two subgroups. Group II-1: experimental gerbils at the acute state of diabetes (n=8), sacrificed two days after diabetes onset; Group II-2: experimental gerbils at the chronic state of diabetes (n=7), sacrificed 30 days after diabetes onset. Two and 30 days after diabetes onset, gerbils had blood drawn from the retro-orbital sinus into EDTA tubes. After centrifugation at 4°C, plasma was frozen at -80°C for later measurement of cortisol, ACTH, and insulin. Afterward, animals were decapitated; their brains were removed, weighed, fixed in aqueous Bouin's solution, processed, and stained with toluidine blue for histo-stereological analysis. A comparison was made with control gerbils treated with citrate buffer. Results: Compared to control gerbils, at two days post diabetes onset, the neuronal somata of the paraventricular (NPV) and supraoptic (NSO) nuclei displayed numerous vacuoles of various sizes; we also observed neuronal juxtaposition, and several unidentifiable vacuolated profiles were seen in the neuropil. At the same time, we revealed the presence of shrunken and condensed nuclei, which appear to affect the parvocellular neurons of the NPV; this leads us to suggest the presence of an apoptotic process in the early stage of diabetes. At 30 days of diabetes mellitus, the NPV showed a few neurons with a distant appearance; in addition, the magnocellular neurons in both the NPV and NSO were hypertrophied, with a euchromatin-rich nucleus, a well-defined nucleolus, and a granular cytoplasm. Despite the neuronal degeneration at this stage, ACTH unexpectedly registered a continuously and significantly high level compared to the early stage of diabetes mellitus and to control gerbils. Conclusion: The results suggest that the induction of diabetes mellitus using STZ in the small gerbil leads to alterations in the structure and morphology of the hypothalamus and to hyper-secretion of ACTH and cortisol, possibly indicating hyperactivity of the hypothalamo-pituitary-adrenal (HPA) axis during both the early and later stages of the disease. Subsequent quantitative evaluation of CRH, immunohistochemical evaluation of apoptosis, and oxidative stress assessment could corroborate these results.

Keywords: diabetes type 1, streptozotocin, small gerbil, hypothalamus, paraventricular nucleus, supraoptic nucleus

Procedia PDF Downloads 77
250 Creation of a Test Machine for the Scientific Investigation of Chain Shot

Authors: Mark McGuire, Eric Shannon, John Parmigiani

Abstract:

Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can result in a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It has speeds similar to a bullet but typically greater mass, and it is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot to occur under carefully controlled conditions and to accurately measure the response. Worldwide, few such test machines exist. Those that do focus on validating the ability of barriers to withstand a chain shot impact rather than on obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. The machine can test all commercially available saw chains and bars at chain tensions and speeds that meet and exceed those typically encountered in harvester use, and it accurately measures the corresponding key technical parameters. The test machine was constructed inside a standard shipping container. This provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and can be located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated in accordance with ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. Measurement output shows the fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and, consequently, improved products and greater harvester operator safety.

Keywords: chain shot, safety, testing, timber harvesters

Procedia PDF Downloads 155
249 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of heating systems. In the first stage, the operation of different energy systems is calculated in simulation models in terms of the resulting final energy demands; these results then serve as input for the second-stage MILP optimization, in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures, owing to the efficiency of MILP solvers, but it necessitates simplifying the operation of the building energy system. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
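
A minimal sketch of the kind of portfolio MILP described above, written in Python with the PuLP library; the buildings, measures, costs, savings, and budget are all hypothetical placeholders, and the real model's operational detail is deliberately omitted:

    from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, PULP_CBC_CMD

    # Hypothetical portfolio data: investment cost (EUR) and yearly CO2 savings (t)
    # per modernization measure; none of these figures come from the study.
    buildings = ["A", "B"]
    measures = {"envelope": (60000, 8.0), "heat_pump": (40000, 12.0), "pv": (20000, 4.0)}
    years = [1, 2, 3]
    budget = 70000  # yearly investment budget

    prob = LpProblem("modernization_pathway", LpMinimize)
    x = {(b, m, y): LpVariable(f"x_{b}_{m}_{y}", cat=LpBinary)
         for b in buildings for m in measures for y in years}

    # Maximize cumulative emission savings: a measure taken in year y keeps
    # saving emissions for all remaining years of the planning horizon.
    prob += -lpSum(measures[m][1] * (years[-1] - y + 1) * x[b, m, y]
                   for b in buildings for m in measures for y in years)

    # Each measure is implemented at most once per building.
    for b in buildings:
        for m in measures:
            prob += lpSum(x[b, m, y] for y in years) <= 1

    # The yearly budget caps how much of the portfolio is modernized per year.
    for y in years:
        prob += lpSum(measures[m][0] * x[b, m, y]
                      for b in buildings for m in measures) <= budget

    prob.solve(PULP_CBC_CMD(msg=False))
    selected = [k for k, v in x.items() if v.value() == 1]

The binary variables encode when, if ever, each measure is taken; the budget constraint is what forces the prioritization across the planning horizon that the abstract describes.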

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 41
248 Design of Evaluation for Ehealth Intervention: A Participatory Study in Italy, Israel, Spain and Sweden

Authors: Monika Jurkeviciute, Amia Enam, Johanna Torres Bonilla, Henrik Eriksson

Abstract:

Introduction: Many evaluations of eHealth interventions conclude that the evidence for improved clinical outcomes is limited, especially when the intervention is short, such as one year. Often, the evaluation design does not address the feasibility of achieving clinical outcomes. Evaluations are designed to reflect the clinical goals of the intervention without utilizing the opportunity to illuminate effects on organizations and cost. A comprehensive evaluation design can better support decision-making regarding the effectiveness and potential transferability of eHealth. Hence, the purpose of this paper is to present a feasible and comprehensive design of evaluation for an eHealth intervention, including the design process in different contexts. Methodology: The situation of limited feasibility of clinical outcomes was foreseen in the European Union-funded project "DECI" ("Digital Environment for Cognitive Inclusion"), run under the "Horizon 2020" program with the aim of defining and testing a digital environment platform, within corresponding care models, that helps elderly people live independently. A complex intervention of eHealth implementation into elaborate care models in four different countries was planned for one year. To design the evaluation, a participative approach was undertaken using Pettigrew's lens of change and transformations, covering context, process, and content. Through a series of workshops, observations, interviews, and document analysis, as well as a review of the scientific literature, a comprehensive evaluation design was created. Findings: The findings indicate that in order to obtain evidence on clinical outcomes, eHealth interventions should last longer than one year. The content of the comprehensive evaluation design includes a collection of qualitative and quantitative methods for data gathering that illuminates non-medical aspects. Furthermore, it contains communication arrangements to discuss the results and continuously improve the evaluation design, as well as procedures for monitoring and improving the data collection during the intervention. The process of the comprehensive evaluation design consists of four stages: (1) analysis of the current state in different contexts, including measurement systems, expectations and profiles of stakeholders, organizational ambitions for change due to eHealth integration, and the organizational capacity to collect data for evaluation; (2) a workshop with project partners to discuss the as-is situation in relation to the project goals; (3) development of general and customized sets of relevant performance measures, questionnaires, and interview questions; and (4) setting up procedures and monitoring systems for the interventions. Lastly, strategies are presented for handling challenges during the evaluation design process in four different countries. The evaluation design needs to consider contextual factors such as project limitations and differences between pilot sites in terms of eHealth solutions, patient groups, care models, and national and organizational cultures and settings. This implies a need for a flexible approach to evaluation design to enable judgment of the effectiveness and the potential for adoption and transferability of eHealth. In summary, this paper provides learning opportunities for future evaluation designs of eHealth interventions in different national and organizational settings.

Keywords: ehealth, elderly, evaluation, intervention, multi-cultural

Procedia PDF Downloads 327
247 Acrylate-Based Photopolymer Resin Combined with Acrylated Epoxidized Soybean Oil for 3D-Printing

Authors: Raphael Palucci Rosa, Giuseppe Rosace

Abstract:

Stereolithography (SLA) is one of the 3D-printing technologies that has been steadily growing in popularity for both industrial and personal applications due to its versatility, high accuracy, and low cost. Its printing process consists of using a light emitter to solidify photosensitive liquid resins layer by layer to produce solid objects. However, the majority of the resins used in SLA are derived from petroleum and are characterized by toxicity, stability, and recalcitrance to degradation in natural environments. Aiming to develop an eco-friendly resin, this work investigated different combinations of a standard commercial SLA resin (Peopoly UV professional) with a vegetable-based resin. To reach this goal, different mass concentrations (varying from 10 to 50 wt%) of acrylated epoxidized soybean oil (AESO), a vegetable resin produced from soybean oil, were mixed with a commercial acrylate-based resin. 1.0 wt% of diphenyl(2,4,6-trimethylbenzoyl)phosphine oxide (TPO) was used as the photoinitiator, and the samples were printed using a Peopoly Moai 130. The machine was set to operate at the standard configuration for printing commercial resins. After each print was finished, the excess resin was drained off, and the samples were washed in isopropanol and water to remove any unreacted resin. Finally, the samples were post-cured for 30 min in a UV chamber. FT-IR analysis was used to confirm the UV polymerization of the formulated resin with different AESO/Peopoly ratios. The signals from 1643.7 to 1616 cm-1, which correspond to the C=C stretching of the AESO acrylic acids and the Peopoly acrylate groups, decreased significantly after the reaction. This decrease indicates the consumption of the double bonds during the radical polymerization. Furthermore, the slight shift of the C-O-C signal from 1186.1 to 1159.9 cm-1 and the decrease of the signals at 809.5 and 983.1 cm-1, which correspond to unsaturated double bonds, are both proofs of the successful polymerization. Mechanical analyses showed a decrease of 50.44% in tensile strength when adding 10 wt% of AESO, but the value was still in the same range as other commercial resins. The elongation at break increased by 24% with 10 wt% of AESO, and swelling analysis showed that samples with a higher concentration of AESO absorbed less water than their counterparts. Furthermore, high-resolution prototypes were printed using both resins, and visual analysis did not show any significant difference between the two products. In conclusion, the AESO resin was successfully incorporated into a commercial resin without affecting its printability. The bio-based resin showed lower tensile strength than the Peopoly resin due to network loosening, but it was still in the range of other commercial resins. The hybrid resin also showed better flexibility and water resistance than the Peopoly resin without affecting its resolution. Finally, the development of new types of SLA resins is essential to provide sustainable alternatives to the commercial petroleum-based ones.

Keywords: 3D-printing, bio-based, resin, soybean, stereolithography

Procedia PDF Downloads 130
246 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning

Authors: Ioanna Taouki, Marie Lallier, David Soto

Abstract:

Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative forced-choice tasks (two linguistic: a lexical decision task and a visual attention span task; one non-linguistic: an emotion recognition task), including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how well confidence ratings track accuracy in the task) relates to performance in general standardized tasks related to students' reading and general cognitive abilities, using Spearman's and Bayesian correlation analyses. Second, we assessed whether young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across the different task domains and evaluating cross-task covariance by applying a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability at this early stage is related to the longitudinal learning of children in a linguistic and a non-linguistic task. Notably, we did not observe any association between students' reading skills and metacognitive processing at this early stage of reading acquisition. Some evidence consistent with domain-general metacognition was found, with significant positive correlations in metacognitive efficiency between the lexical decision and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in the linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities, and they further stress the importance of creating educational programs that foster students' metacognitive ability as a tool for long-term learning. More research is crucial to understand whether such programs can enhance metacognitive ability as a transferable skill across distinct domains or whether unique domains should be targeted separately.
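
As an illustration of what it means for confidence ratings to "track accuracy", here is a minimal Python sketch of a type-2 ROC analysis on hypothetical trials; note that the study itself used a recently developed SDT model, so this simpler, common measure is illustrative only:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical trial-by-trial data: accuracy (1 = correct) and confidence (1-4).
    correct = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1])
    confidence = np.array([4, 3, 2, 4, 1, 3, 2, 2, 4, 3, 1, 4])

    # Type-2 AUROC: how well confidence discriminates correct from incorrect
    # trials; 0.5 means no metacognitive sensitivity, 1.0 perfect insight.
    auroc2 = roc_auc_score(correct, confidence)
    print(f"type-2 AUROC = {auroc2:.2f}")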

Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition

Procedia PDF Downloads 154
245 Development of PCL/Chitosan Core-Shell Electrospun Structures

Authors: Hilal T. Sasmazel, Seda Surucu

Abstract:

Skin tissue engineering is a promising field for the treatment of skin defects using scaffolds. This approach involves the use of living cells and biomaterials to restore, maintain, or regenerate tissues and organs in the body by providing: (i) a larger surface area for cell attachment, (ii) proper porosity for cell colonization and cell-to-cell interaction, and (iii) 3-dimensionality at the macroscopic scale. Recent studies in this area mainly focus on the fabrication of scaffolds that closely mimic the natural extracellular matrix (ECM) to create a tissue-specific, niche-like environment at the subcellular scale. Scaffolds designed as ECM-like architectures incorporate into the host with minimal scarring/pain and facilitate angiogenesis. This study concerns the combination of synthetic PCL and natural chitosan polymers to form 3D PCL/chitosan core-shell structures for skin tissue engineering applications. Amongst the polymers used in tissue engineering, the natural polymer chitosan and the synthetic polymer poly(ε-caprolactone) (PCL) are widely preferred in the literature. Chitosan has long been of interest to researchers because of its superior biocompatibility and structural resemblance to the glycosaminoglycans of bone tissue. However, its low mechanical flexibility and limited biodegradability make it necessary to use this polymer in a composite structure. On the other hand, PCL is a versatile polymer due to its low melting point (60°C), ease of processability, degradability by non-enzymatic processes (hydrolysis), and good mechanical properties. Nevertheless, PCL also has several disadvantages, such as its hydrophobic structure, limited bio-interaction, and susceptibility to bacterial biodegradation. Therefore, it became crucial to use these two polymers together as a hybrid material in order to overcome the disadvantages of each and combine their advantages. The scaffolds here were fabricated using the electrospinning technique, and the samples were characterized by contact angle (CA) measurements, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and X-ray photoelectron spectroscopy (XPS). Additionally, a gas permeability test, mechanical testing, thickness measurements, and PBS absorption and shrinkage tests were performed for all types of scaffolds (PCL, chitosan, and PCL/chitosan core-shell). Using the ImageJ software (USA) on SEM photographs, the average fiber diameter values were calculated as 0.717±0.198 µm for PCL, 0.660±0.070 µm for chitosan, and 0.412±0.339 µm for the PCL/chitosan core-shell structures. Additionally, the average inter-fiber pore size values exhibited decreases of 66.91% and 61.90% for the PCL and chitosan structures, respectively, compared to the PCL/chitosan core-shell structures. TEM images proved that homogeneous, continuous, bead-free core-shell fibers were obtained. XPS analysis of the PCL/chitosan core-shell structures exhibited the characteristic peaks of the PCL and chitosan polymers. The average gas permeability of the produced PCL/chitosan core-shell structure was determined to be 2315±3.4 g.m-2.day-1. In the future, cell-material interactions of the developed PCL/chitosan core-shell structures will be investigated with the L929 ATCC CCL-1 mouse fibroblast cell line. A standard MTT assay and microscopic imaging methods will be used to investigate the cell attachment, proliferation, and growth capacities of the developed materials.

Keywords: chitosan, coaxial electrospinning, core-shell, PCL, tissue scaffold

Procedia PDF Downloads 487
244 A Quasi-Systematic Review on Effectiveness of Social and Cultural Sustainability Practices in Built Environment

Authors: Asif Ali, Daud Salim Faruquie

Abstract:

With the advancement of knowledge about the utility and impact of sustainability, its feasibility has been explored in different walks of life. Scientists, however, have established their knowledge in four areas, viz. environmental, economic, social, and cultural, popularly termed the four pillars of sustainability. The environmental and economic aspects of sustainability have been rigorously researched and practiced, and a huge volume of strong evidence of effectiveness has been found for these two sub-areas. For the social and cultural aspects of sustainability, dependable evidence of effectiveness is still to be established, as researchers and practitioners are developing and experimenting with methods across the globe. Therefore, the present research aimed to identify globally used practices of social and cultural sustainability and, through evidence synthesis, assess their outcomes to determine the effectiveness of those practices. A PICO format steered the methodology, which included all populations; popular sustainability practices, including walkability/cycle tracks, social/recreational spaces, privacy, health and human services, and barrier-free built environments; comparators including 'before' and 'after', 'with' and 'without', and 'more' and 'less'; and outcomes including social well-being, cultural co-existence, quality of life, ethics and morality, social capital, sense of place, education, health, recreation and leisure, and holistic development. The literature search included major electronic databases, search websites, organizational resources, the directory of open access journals, and subscribed journals. Grey literature, however, was not included. Inclusion criteria filtered studies on the basis of research design, such as total randomization, quasi-randomization, cluster randomization, observational or single studies, and certain types of analysis. Studies with combined outcomes were considered, but studies focusing only on environmental and/or economic outcomes were rejected. Data extraction, critical appraisal, and evidence synthesis were carried out using customized tabulation, a reference manager, and the CASP tool. A partial meta-analysis was carried out, with calculation of pooled effects and forest plotting. The 13 studies finally included in the synthesis explained the impact of the targeted practices on health, behavioural, and social dimensions. Objectivity in the measurement of health outcomes facilitated the quantitative synthesis of studies, which highlighted the impact of sustainability methods on physical activity, body mass index, perinatal outcomes, and child health. Studies synthesized qualitatively (and also quantitatively) showed outcomes such as routines, family relations, citizenship, trust in relationships, social inclusion, neighbourhood social capital, well-being, habitability, and families' social processes. The synthesized evidence indicates slight effectiveness and efficacy of social and cultural sustainability practices on the targeted outcomes. Further synthesis revealed that these results are due to weak research designs and disintegrated implementations. If architects and other practitioners deliver their interventions in collaboration with research bodies and policy makers, a stronger evidence base in this area could be generated.
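
A minimal sketch of the pooled-effect calculation behind such a meta-analysis, with hypothetical study-level data (the review's actual values are not reproduced here); fixed-effect inverse-variance weighting is one standard choice:

    import numpy as np

    # Hypothetical per-study effects (standardized mean differences) and
    # standard errors; placeholders, not the review's extracted data.
    effects = np.array([0.32, 0.18, 0.45, 0.05])
    se = np.array([0.12, 0.09, 0.20, 0.15])

    # Fixed-effect inverse-variance pooling: each study weighted by 1 / SE^2.
    w = 1.0 / se**2
    pooled = np.sum(w * effects) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled effect = {pooled:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")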

Keywords: built environment, cultural sustainability, social sustainability, sustainable architecture

Procedia PDF Downloads 403
243 Big Data for Local Decision-Making: Indicators Identified at International Conference on Urban Health 2017

Authors: Dana R. Thomson, Catherine Linard, Sabine Vanhuysse, Jessica E. Steele, Michal Shimoni, Jose Siri, Waleska Caiaffa, Megumi Rosenberg, Eleonore Wolff, Tais Grippa, Stefanos Georganos, Helen Elsey

Abstract:

The Sustainable Development Goals (SDGs) and Urban Health Equity Assessment and Response Tool (Urban HEART) identify dozens of key indicators to help local decision-makers prioritize and track inequalities in health outcomes. However, presentations and discussions at the International Conference on Urban Health (ICUH) 2017 suggested that additional indicators are needed to make decisions and policies. A local decision-maker may realize that malaria or road accidents are a top priority. However, s/he needs additional health determinant indicators, for example about standing water or traffic, to address the priority and reduce inequalities. Health determinants reflect the physical and social environments that influence health outcomes often at community- and societal-levels and include such indicators as access to quality health facilities, access to safe parks, traffic density, location of slum areas, air pollution, social exclusion, and social networks. Indicator identification and disaggregation are necessarily constrained by available datasets – typically collected about households and individuals in surveys, censuses, and administrative records. Continued advancements in earth observation, data storage, computing and mobile technologies mean that new sources of health determinants indicators derived from 'big data' are becoming available at fine geographic scale. Big data includes high-resolution satellite imagery and aggregated, anonymized mobile phone data. While big data are themselves not representative of the population (e.g., satellite images depict the physical environment), they can provide information about population density, wealth, mobility, and social environments with tremendous detail and accuracy when combined with population-representative survey, census, administrative and health system data. The aim of this paper is to (1) flag to data scientists important indicators needed by health decision-makers at the city and sub-city scale - ideally free and publicly available, and (2) summarize for local decision-makers new datasets that can be generated from big data, with layperson descriptions of difficulties in generating them. We include SDGs and Urban HEART indicators, as well as indicators mentioned by decision-makers attending ICUH 2017.

Keywords: health determinant, health outcome, mobile phone, remote sensing, satellite imagery, SDG, urban HEART

Procedia PDF Downloads 215
242 Single Stage Holistic Interventions: The Impact on Well-Being

Authors: L. Matthewman, J. Nowlan

Abstract:

Background: Holistic or integrative psychology emphasizes the interdependence of physiological, spiritual, and psychological dynamics. Studying "wholeness and well-being" from a systems perspective combines innovative psychological science interventions with Eastern-orientated healing wisdoms and therapies. The literature surrounding holistic/integrative psychology focuses on multi-stage interventions in attempts to enhance the mind-body experiences of well-being for participants. This study proposes a new single-stage model as an intervention for UG/PG students, time-constrained workplace employees, and managers/leaders, aimed at improved well-being and life enhancement. The main research objective was to investigate participants' experiences of holistic and mindfulness interventions and their impact on emotional well-being. The main research question asked was whether single-stage holistic interventions could impact psychological well-being. This is of consequence because many people report that a reason for not taking part in mind-body or wellness programmes is that they believe they do not have sufficient time to engage in such pursuits. Experimental approach: The study employed a mixed-methods pre-test/post-test research design. Data were analyzed using descriptive statistics and interpretative phenomenological analysis. Purposive sampling methods were employed. An adapted mindfulness measurement questionnaire (MAAS) was administered to 20 volunteer final-year UG student participants prior to the single-stage intervention and again following the intervention. A further longitudinal follow-up post-test took place one week later. Intervention: The single-stage model intervention consisted of a half-hour session of mindfulness, yoga stretches, and head and neck massage in the following sequence: mindful awareness of the breath, yoga stretches 1, mindfulness of the body, head and neck massage, mindfulness of sounds, yoga stretches 2, finishing with pure-awareness mindfulness. Results: The findings on the pre-test indicated key themes concerning "being largely unaware of feelings", "overwhelmed with final year exams", "juggling other priorities", "not feeling in control", "stress", and "negative emotional display episodes". Themes indicated on the post-test included "more aware of self", "in more control", "immediately more alive", and "just happier" compared to the pre-test. Themes from post-test 2 were similar to those of post-test 1 but registered at lower intensity. Interestingly, the majority of participants reported that they would now seek other similar interventions in the future and would be likely to engage with a multi-stage intervention on a longer-term basis. Overall, participants reported increased psychological well-being after the single-stage intervention. Conclusion: A single-stage, one-off intervention model can be effective in supporting the well-being of final-year UG students, and there is little indication to suggest that this would not be generalizable to others in different areas of life and business. However, this study must be interpreted with caution due to the low participant numbers. Implications: Single-stage, one-off interventions can be used to enhance the lives of people who might not otherwise sign up for a longer multi-stage intervention. In addition, single-stage interventions can be utilized to help participants progress onto longer multi-stage interventions. Finally, further research into single-stage well-being interventions is encouraged.

Keywords: holistic/integrative psychology, mindfulness, well-being, yoga

Procedia PDF Downloads 354