Search results for: distance measurement error
108 The Gender Criteria of Film Criticism: Creating the ‘Big’, Avoiding the Important
Authors: Eleni Karasavvidou
Abstract:
Social and anthropological research, parallel to Gender Studies, highlighted the relationship between social structures and symbolic forms as an important field of interaction and recording of 'social trends,' since the study of representations can contribute to the understanding of the social functions and power relations they encompass. This ‘mirage,’ however, has not only to do with the representations themselves but also with the ways they are received and the film or critical narratives that are established as dominant or alternative. Cinema and the criticism of its cultural products are no exception. Even in the rapidly changing media landscape of the 21st century, movies remain an integral and widespread part of popular culture, making films an extremely powerful means of 'legitimizing' or 'delegitimizing' visions of domination and commonsensical gender stereotypes throughout society. And yet it is film criticism, the 'language per se,' that legitimizes, reinforces, rewards and reproduces (or at least ignores) the stereotypical depictions of female roles that remain common in the realm of film images. Hence the need for this issue to be raised (also) in academic research questioning gender criteria in film reviews as part of the effort for an inclusive art and society. Qualitative content analysis is used to examine female roles in selected Oscar-nominated films against their reviews from leading websites and newspapers. This method was chosen because of the complex nature of the depictions in the films and the narratives they evoke. The films were divided into basic scenes depicting social functions, such as love and work relationships, positions of power and their function, which were analyzed by content analysis, with borrowings from structuralism (Genette) and the local/universal images of intercultural philology (Wierlacher). In addition to the measurement of the general ‘representation-time’ by gender, other qualitative characteristics were also analyzed, such as speaking time, sayings or key actions, and the overall quality of the character's action in relation to the development of the scenario and social representations in general, as well as quantitative characteristics (insufficient number of female lead roles, fewer key supporting roles, relatively few female directors and people in the production chain) and how they might affect screen representations. The quantitative analysis in this study was used to complement the qualitative content analysis. Then the focus shifted to the criteria of film criticism and to the rhetorical narratives that exclude or highlight in relation to gender identities and functions. In the criteria and language of film criticism, stereotypes are often reproduced or allegedly overturned within the framework of apolitical "identity politics," which mainly addresses the surface of a self-referential cultural-consumer product without connecting it more deeply with material and cultural life. One of the prime examples of this failure is the Bechdel Test, which tracks whether female characters speak in a film regardless of whether women's stories are represented or not in the films analyzed. If male filmmakers perceived as unbiased still fail to tell truly feminist stories, the same is the case with the criteria of criticism and the related interventions.
Keywords: representations, content analysis, reviews, sexist stereotypes
Procedia PDF Downloads 84
107 Comparing Radiographic Detection of Simulated Syndesmosis Instability Using Standard 2D Fluoroscopy Versus 3D Cone-Beam Computed Tomography
Authors: Diane Ghanem, Arjun Gupta, Rohan Vijayan, Ali Uneri, Babar Shafiq
Abstract:
Introduction: Ankle sprains and fractures often result in syndesmosis injuries. Unstable syndesmotic injuries result from relative motion between the distal ends of the tibia and fibula, an anatomic juncture that should otherwise be rigid, and warrant operative management. Clinical and radiological evaluations of intraoperative syndesmosis stability remain a challenging task as traditional 2D fluoroscopy is limited to a uniplanar translational displacement. The purpose of this pilot cadaveric study is to compare the 2D fluoroscopy and 3D cone beam computed tomography (CBCT) stress-induced syndesmosis displacements. Methods: Three fresh-frozen lower legs underwent 2D fluoroscopy and 3D CIOS CBCT to measure syndesmosis position before dissection. Syndesmotic injury was simulated by resecting (1) the anterior inferior tibiofibular ligament (AITFL), (2) the posterior inferior tibiofibular ligament (PITFL) and the inferior transverse ligament (ITL) simultaneously, followed by (3) the interosseous membrane (IOM). Manual external rotation and Cotton stress tests were performed after each of the three resections, and 2D and 3D images were acquired. Relevant 2D and 3D parameters included the tibiofibular overlap (TFO), tibiofibular clear space (TCS), relative rotation of the fibula, and anterior-posterior (AP) and medial-lateral (ML) translations of the fibula relative to the tibia. Parameters were measured by two independent observers. Inter-rater reliability was assessed by the intraclass correlation coefficient (ICC) to determine measurement precision. Results: Significant mismatches were found in the trends between the 2D and 3D measurements when assessing TFO, TCS and AP translation across the different resection states. Using 3D CBCT, TFO was inversely proportional to the number of resected ligaments while TCS was directly proportional to the latter across all cadavers and ‘resection + stress’ states. Using 2D fluoroscopy, this trend was not respected under the Cotton stress test. 3D AP translation did not show a reliable trend, whereas 2D AP translation of the fibula was positive under the Cotton stress test and negative under external rotation. 3D relative rotation of the fibula, assessed using the Tang et al. ratio method and the Beisemann et al. angular method, suggested slight overall internal rotation with complete resection of the ligaments, with a change < 2 mm, a threshold which corresponds to the commonly used buffer to account for physiologic laxity as per the clinical judgment of the surgeon. Excellent agreement (>0.90) was found between the two independent observers for each of the parameters in both 2D and 3D (overall ICC 0.9968, 95% CI 0.995 - 0.999). Conclusions: The 3D CIOS CBCT appears to reliably depict the trend in TFO and TCS. This might be due to the additional detection of relevant rotational malpositions of the fibula in comparison to the standard 2D fluoroscopy, which is limited to a single plane translation. A better understanding of 3D imaging may help surgeons identify the precise measurement planes needed to achieve better syndesmosis repair.
Keywords: 2D fluoroscopy, 3D computed tomography, image processing, syndesmosis injury
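For reference, a common two-way random-effects form of the intraclass correlation for absolute agreement between raters is sketched below; the abstract does not state which ICC model was used, so this is shown only as a typical choice:
\mathrm{ICC}(2,1) = \frac{MS_R - MS_E}{MS_R + (k-1)\,MS_E + \tfrac{k}{n}\,(MS_C - MS_E)}
where MS_R, MS_C and MS_E are the mean squares for subjects, raters and error from a two-way ANOVA, k is the number of raters (here, two observers) and n the number of measured specimens.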
Procedia PDF Downloads 70
106 A Flexible Piezoelectric - Polymer Composite for Non-Invasive Detection of Multiple Vital Signs of Human
Authors: Sarah Pasala, Elizabeth Zacharias
Abstract:
Vital sign monitoring is crucial for both everyday health and medical diagnosis. A significant factor in assessing a human's health is their vital signs, which include heart rate, breathing rate, blood pressure, and electrocardiogram (ECG) readings. Vital sign monitoring has been the focus of many system and method innovations recently. Piezoelectrics are materials that convert mechanical energy into electrical energy and can be used for vital sign monitoring. Piezoelectric energy harvesters that are stretchable and flexible can detect very low frequencies like airflow, heartbeat, etc. Current advancements in piezoelectric materials and flexible sensors have made it possible to create wearable and implantable medical devices that can continuously monitor physiological signals in humans. However, because of their non-biocompatible nature, such devices also produce a large amount of e-waste and require a second surgery to remove the implant. This paper presents a biocompatible and flexible piezoelectric composite material for wearable and implantable devices that offers a high-performance platform for seamless and continuous monitoring of human physiological signals and tactile stimuli. It also addresses the issue of e-waste and secondary surgery. A lead-free piezoelectric, SrBi4Ti4O15 (SBT), is found to be suitable for this application because its properties can be tailored by suitable substitutions and also by varying the synthesis temperature protocols. In the present work, rare-earth-modified SrBi4Ti4O15 has been synthesized and studied. Coupling factors are calculated from the resonant (fr) and anti-resonant (fa) frequencies. It is observed that Samarium substitution in SBT has increased the Curie temperature, dielectric and piezoelectric properties. From impedance spectroscopy studies, relaxation and non-Debye type behaviour are observed. The composite of bioresorbable poly(l-lactide) and lead-free rare-earth-modified Bismuth Layered Ferroelectrics leads to a flexible piezoelectric device for non-invasive measurement of vital signs, such as heart rate, breathing rate, blood pressure, and electrocardiogram (ECG) readings, and also artery pulse signals in near-surface arteries. These composites are suitable for detecting slight movements of the muscles and joints. This lead-free rare-earth-modified Bismuth Layered Ferroelectric - polymer composite is synthesized using a ball mill and the solid-state double sintering method. XRD studies indicated the two phases in the composite. SEM studies revealed the grain size to be uniform and in the range of 100 nm. The electromechanical coupling factor is improved. The elastic constants are calculated, and the mechanical flexibility is found to be improved as compared to the single-phase rare-earth-modified Bismuth Layered piezoelectric. The results indicate that this composite is suitable for the non-invasive detection of multiple vital signs of humans.
Keywords: composites, flexible, non-invasive, piezoelectric
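As a note on the coupling-factor calculation mentioned above, the effective electromechanical coupling factor is commonly estimated from the resonant and anti-resonant frequencies with the standard relation below; the abstract does not give the exact expression used, so this is a conventional sketch:
k_{eff}^{2} = \frac{f_a^{2} - f_r^{2}}{f_a^{2}}
where f_r and f_a are the resonant and anti-resonant frequencies of the impedance spectrum.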
Procedia PDF Downloads 37
105 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling
Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie
Abstract:
Water electrolysis is one of the most advanced technologies for producing hydrogen and can be easily combined with electricity from different sources. Under the influence of electric current, water molecules can be split into oxygen and hydrogen. The production of hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittency through the storage of the energy produced when production exceeds demand and its release during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyzer cells are emerging as a reliable technology for water electrolysis. Modeling and simulation are effective tools to save time, money, and effort during the optimization of operating conditions and the investigation of the design. Modeling and simulation become even more important when dealing with multiphysics dynamic systems. One of those systems is the AEM electrolysis cell, which involves complex physico-chemical reactions. Once developed, models may be utilized to comprehend the mechanisms, control the systems, and detect flaws in them. Several modeling methods have been initiated by scientists. These methods can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former approach is less user-friendly and difficult to update, as it is based on ordinary or partial differential equations to represent the systems. The latter approach is more user-friendly and allows a clear representation of physical phenomena. In this case, the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, hence being suitable for multiphysics systems. Among the graphical modeling methods, the bond graph is receiving increasing attention as being domain-independent and relying on the energy exchange between the components of the system. At present, few studies have investigated the modeling of AEM systems. A mathematical model and a bond graph model were used in previous studies to model the electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both bond graph and mathematical approaches. The polarization curves at different operating conditions obtained by both approaches were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) was behind the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both HER and OER, while high-conductivity AEMs are needed for effectively lowering the ohmic losses. The bond graph simulation of the polarization curve for operating conditions at various temperatures has illustrated that voltage increases with temperature owing to the technology of the membrane. The polarization curve can be tested virtually through simulation, hence reducing the cost and time involved in experimental testing and improving design optimization. Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.
Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling
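As an illustration of the loss decomposition described above (a textbook-style sketch, not the exact formulation implemented in the OpenModelica models), the cell voltage can be written as the reversible voltage plus the activation overpotentials of the two half-reactions and the ohmic loss:
V_{cell} = E_{rev} + \eta_{act,HER} + \eta_{act,OER} + \eta_{ohm}, \qquad \eta_{ohm} = j\,R_{mem}
where j is the current density and R_{mem} the area-specific resistance of the membrane; the activation terms are typically modeled with Butler-Volmer or Tafel kinetics.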
Procedia PDF Downloads 91
104 Innovation Eco-Systems and Cities: Sustainable Innovation and Urban Form
Authors: Claudia Trillo
Abstract:
Regional innovation eco-systems are composed of a variety of interconnected urban innovation eco-systems, mutually reinforcing each other and making the whole territorial system successful. Combining principles drawn from the new economic growth theory and from the socio-constructivist approach to economic growth, with the new geography of innovation emerging from the networked nature of innovation districts, this paper explores the spatial configuration of urban innovation districts, with the aim of unveiling replicable spatial patterns and transferable portfolios of urban policies. While some authors suggest that cities should be considered ideal natural clusters, supporting cross-fertilization and innovation thanks to the physical setting they provide to the construction of collective knowledge, a considerable distance still persists between regional development strategies and urban policies. Moreover, while public and private policies supporting entrepreneurship normally consider innovation as the cornerstone of any action aimed at uplifting the competitiveness and economic success of a certain area, a growing body of literature suggests that innovation is non-neutral; hence, it should be constantly assessed against equity and social inclusion. This paper draws from a robust qualitative empirical dataset gathered through four years of research conducted in Boston to provide readers with an evidence-based set of recommendations drawn from the lessons learned through the investigation of the chosen innovation districts in the Boston area. The evaluative framework used for assessing the overall performance of the chosen case studies stems from the Habitat III Sustainable Development Goals rationale. The concept of inclusive growth has been considered essential to assess the social innovation domain in each of the chosen cases. The key success factors for the development of the Boston innovation ecosystem can be generalized as follows: 1) a quadruple helix model embedded in the physical structure of the two cities (Boston and Cambridge), in which anchor Higher Education (HE) institutions continuously nurture the entrepreneurial environment; 2) an entrepreneurial approach emerging from the local governments, eliciting risk-taking and bottom-up civic participation in tackling key issues in the city; 3) a networking structure of intermediary actors supporting entrepreneurial collaboration, cross-fertilization and co-creation, which collaborate at multiple scales, thus enabling positive spillovers from the stronger to the weaker contexts; 4) awareness of the socio-economic value of the built environment as an enabler of cognitive networks allowing activation of the collective intelligence; 5) creation of civic-led spaces enabling grassroots collaboration and cooperation. Evidence shows that there is not a single magic recipe for the successful implementation of place-based and social innovation-driven strategies. On the contrary, the variety of place-grounded combinations of micro and macro initiatives, embedded in the social and spatial fine grain of places and encompassing a diversity of actors, can create the conditions enabling places to thrive and local economic activities to grow in a sustainable way.
Keywords: innovation-driven sustainable eco-systems, place-based sustainable urban development, sustainable innovation districts, social innovation, urban policies
Procedia PDF Downloads 104
103 Single Crystal Growth in Floating-Zone Method and Properties of Spin Ladders: Quantum Magnets
Authors: Rabindranath Bag, Surjeet Singh
Abstract:
Materials in which the electrons are strongly correlated provide some of the most challenging and exciting problems in condensed matter physics today. After the discovery of high critical temperature superconductivity in layered or two-dimensional copper oxides, many physicists turned their attention to cuprates, which led to an upsurge of interest in the synthesis and physical properties of copper-oxide based materials. The quest to understand the superconducting mechanism in high-temperature cuprates drew physicists' attention to somewhat simpler compounds consisting of spin chains or one-dimensional lattices of coupled spins. Low-dimensional quantum magnets are of huge contemporary interest in basic sciences as well as in emerging technologies such as quantum computing and quantum information theory, and heat management in microelectronic devices. The spin ladder is an example of a quasi-one-dimensional quantum magnet which provides a bridge between one- and two-dimensional materials. One example of a quasi-one-dimensional spin-ladder compound is Sr14Cu24O41, which exhibits a lot of interesting and exciting physical phenomena in low-dimensional systems. Very recently, the ladder compound Sr14Cu24O41 was shown to exhibit long-distance quantum entanglement crucial to quantum information theory. Also, it is well known that hole-compensation in this material results in very high (metal-like) anisotropic thermal conductivity at room temperature. These observations suggest that Sr14Cu24O41 is a potential multifunctional material which invites further detailed investigations. To investigate these properties, one needs large, high-quality single crystals. However, these systems show incongruent melting behavior, which makes it difficult to grow large, high-quality single crystals. Hence, we use the TSFZ (Travelling Solvent Floating Zone) method to grow high-quality single crystals of these low-dimensional magnets. Apart from this, Sr14Cu24O41 has a unique crystal structure (alternating stacks of planes containing edge-sharing CuO2 chains and planes containing two-leg Cu2O3 ladders, with intermediate Sr layers along the b-axis), which is also incommensurate in nature. It exhibits abundant physical phenomena such as spin dimerization, crystallization of charge holes and charge density waves. Most research so far has focused on introducing defects on the A-site (Sr). Apart from A-site (Sr) doping, there are only a few studies in which B-site (Cu) doping of polycrystalline Sr14Cu24O41 has been discussed, and the reason behind this is the possibility of two doping sites for Cu (CuO2 chain and Cu2O3 ladder). Therefore, in the present work, the crystals (pristine and Cu-site doped) were grown using the TSFZ method by tuning the growth parameters. The Laue diffraction images, optical polarized microscopy and Scanning Electron Microscopy (SEM) images confirm the quality of the grown crystals. Here, we report the single crystal growth, magnetic and transport properties of Sr14Cu24O41 and its lightly doped variants (magnetic and non-magnetic) containing less than 1% of Co, Ni, Al and Zn impurities. Since any real system will have some amount of weak disorder, our studies on these ladder compounds with controlled dilute disorder would be significant in the present context.
Keywords: low-dimensional quantum magnets, single crystal, spin-ladder, TSFZ technique
Procedia PDF Downloads 274
102 Assessing Measures and Caregiving Experiences of Thai Caregivers of Persons with Dementia
Authors: Piyaorn Wajanatinapart, Diane R. Lauver
Abstract:
The number of persons with dementia (PWD) has increased. Informal caregivers are the major providers of care. They can perceive both gains and burdens. Caregivers who report high perceived gains may report lower burdens and better health. Gaps in the caregiving literature were: unreported psychometrics and unclear definitions of gains in a few studies; most studies not being theory-guided and being conducted in Western countries; and incompletely described relationships among caregiving variables: motivations, satisfaction with psychological needs, social support, gains, burdens, and physical and psycho-emotional health. Those gaps were filled by assessing the psychometric properties of selected measures, providing clear definitions of gains, using self-determination theory (SDT) to guide the study, and conducting the study in Thailand. The study purposes were to evaluate six measures for internal consistency reliability, content validity, and construct validity. This study also examined relationships among caregiving variables: motivations (controlled and autonomous motivations), satisfaction with psychological needs (autonomy, competency, and relatedness), perceived social support, perceived gains, perceived burdens, and physical and psycho-emotional health. This study was a cross-sectional and correlational descriptive design with two convenience samples. Sample 1 was five Thai experts to assess the content validity of the measures. Sample 2 was 146 Thai caregivers of PWD to assess construct validity, reliability, and relationships among caregiving variables. Experts rated questionnaires and sent them back via e-mail. Caregivers answered questionnaires at clinics of four Thai hospitals. Data analysis used descriptive statistics and bivariate and multivariate analyses using the composite indicator structural equation model to control for measurement errors. For study results, most caregivers were female (82%), middle-aged (M = 51.1, SD = 11.9), and daughters (57%). They provided care for 15 hours/day for an average of 4.6 years. The content validity indices of items and scales were .80 or higher for clarity and relevance. Experts suggested item revisions. Cronbach's alphas were .63 to .93 for ten subscales of four measures and .26 to .57 for three subscales. The gain scale was acceptable for construct validity. Controlling for covariates, controlled motivations, the satisfaction with the three subscales of psychological needs, and perceived social support had positive relationships with physical and psycho-emotional health. Both the satisfaction with autonomy subscale and perceived social support had negative relationships with perceived burdens. The satisfaction with the three subscales of psychological needs had positive relationships among themselves. The physical and psycho-emotional health subscales had positive relationships with each other. Furthermore, perceived burdens had negative relationships with physical and psycho-emotional health. This study was the first to use SDT to describe relationships among caregiving variables in Thailand. Caregivers' characteristics were consistent with the literature. Four measures were valid and reliable; two were not. Broad knowledge about these relationships was provided. Interpretation of the study results requires caution because the same sample was used to evaluate the psychometric properties of the measures and the relationships among caregiving variables. Researchers could use the four measures for further caregiving studies. Using a theory would help describe the concepts, propositions, and measures used. Researchers may examine the satisfaction with psychological needs as mediators. Future studies collecting data from caregivers in communities are needed.
Keywords: caregivers, caregiving, dementia, measures
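For reference, the internal consistency figures reported above are typically computed as Cronbach's alpha; the standard formula is sketched below as a reminder rather than as the authors' own derivation:
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_t^{2}}\right)
where k is the number of items in a subscale, \sigma_i^2 the variance of item i, and \sigma_t^2 the variance of the total subscale score.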
Procedia PDF Downloads 308
101 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis
Authors: Iman Farasat, Howard M. Salis
Abstract:
Engineered genetic circuits reprogram cellular behavior to act as living computers with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate and design genetic circuit behavior towards a desired behavior. While such models assume that each circuit component's function is modular and independent, even small changes in a circuit (e.g. a new promoter, a change in transcription factor expression level, or even a new media) can have significant effects on the circuit's function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model's accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs and data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C). Model predictions correctly accounted for how these 8 factors control a promoter's transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter's output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 2-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies genetic circuit design, which is particularly important as circuits employ more TFs to perform increasingly complex functions.
Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement
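To give a flavor of the statistical-thermodynamic relation that such models build on (a simplified textbook sketch of repressor-regulated transcription, not the authors' full model), the probability that RNA polymerase occupies a promoter in the presence of a repressor can be written with Boltzmann weights of effective binding free energies that absorb the copy numbers of the polymerase and the TF:
P_{bound} = \frac{e^{-\Delta G_{RNAP}/k_B T}}{1 + e^{-\Delta G_{RNAP}/k_B T} + e^{-\Delta G_{TF}/k_B T}}
with the transcription rate taken as proportional to P_{bound}; measuring \Delta G_{TF} in vivo is what makes such predictions quantitative.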
Procedia PDF Downloads 473
100 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection
Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément
Abstract:
The need for single-cell detection and analysis techniques has increased in the past decades because of the heterogeneity of individual living cells, which increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection, high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and monitoring the variation of these species mainly include two types. One is based on the identification of molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example, the dielectrophoresis technique, microfluidic micropost-based chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, a new technique is presented, based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false positive signal arising from the pressure exerted by all (including non-target) cells pushing on the aptamers by downward force but also to stabilize the aptamer in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, a LOD of 13 cells (with 5.4 μL of cell suspension) was estimated. Furthermore, the nanosupported cell technology using redox-labeled aptasensors has been pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD has been reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals through minimizing the sensor area and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling over millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with a LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to their challenging implementation at a large scale. Here, the introduced nanopillar array technology combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM) is perfectly suited for such implementation. Combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.
Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars
Procedia PDF Downloads 117
99 The Role of Serum Fructosamine as a Monitoring Tool in Gestational Diabetes Mellitus Treatment in Vietnam
Authors: Truong H. Le, Ngoc M. To, Quang N. Tran, Luu T. Cao, Chi V. Le
Abstract:
Introduction: In Vietnam, the current monitoring and treatment of ordinary diabetic patients is mostly based on glucose monitoring with an HbA1c test every three months (recommended goal: HbA1c < 6.5%~7%). For diabetes in pregnant women, or gestational diabetes mellitus (GDM), glycemic control until the time of delivery is extremely important because it can significantly reduce medical complications for both the mother and the child. Besides, GDM requires continuous glucose monitoring at least every two weeks, and therefore an alternative marker of glycemia for short-term control is considered a potential tool for healthcare providers. Published studies have indicated that glycosylated serum protein is a better indicator than glycosylated hemoglobin in GDM monitoring. Based on the actual practice in Vietnam, this study was designed to evaluate the role of serum fructosamine as a monitoring tool in GDM treatment and its correlations with fasting blood glucose (G0), 2-hour postprandial glucose (G2) and glycosylated hemoglobin (HbA1c). Methods: A cohort study on pregnant women diagnosed with GDM by the 75-gram oral glucose tolerance test was conducted at the Endocrinology Department, Cho Ray hospital, Vietnam, from June 2014 to March 2015. Cho Ray hospital is the final referral destination for GDM patients in southern Vietnam; the study population comes from many other provinces, and therefore the researchers believe that this demographic characteristic helps make the study results representative of the whole area. In this study, diabetic patients received a continuous glucose monitoring protocol consisting of on-site visits every 2 weeks with glycosylated serum protein, fasting blood glucose and 2-hour postprandial glucose tests; an HbA1c test every 3 months; and nutritional consultation for a daily diet program. The subjects still received routine treatment at the hospital, with tight follow-up from their healthcare providers. Researchers recorded bi-weekly health conditions, serum fructosamine levels and delivery outcomes from the pregnant women, using the Stata 13 program for the analysis. Results: A total of 500 pregnant women were enrolled and followed up in this study. The serum fructosamine level was found to have a weak correlation with G0 (r = 0.3458, p < 0.001) and HbA1c (r = 0.3544, p < 0.001), and a moderate correlation with G2 (r = 0.4379, p < 0.001). During the study period, the delivery outcomes of 287 women were recorded, with an average gestational age at delivery of 38.5 ± 1.5 weeks; 9% of them had macrosomia, 2.8% had premature birth before week 35 and 9.8% before week 37; 64.8% were cesarean sections, and there was no perinatal or neonatal mortality. The study provides a reference interval of serum fructosamine for GDM patients of 112.9 ± 20.7 μmol/dL. Conclusion: The present results suggest that serum fructosamine is as effective as HbA1c as a reflection of blood glucose control in GDM patients, with a positive result in delivery outcomes (0% perinatal or neonatal mortality). The reference value of serum fructosamine measurement provides a potential monitoring utility in GDM treatment for hospitals in Vietnam. Healthcare providers at Cho Ray hospital are considering conducting more studies to test this reference as a target value in their GDM treatment and monitoring.
Keywords: gestational diabetes mellitus, monitoring tool, serum fructosamine, Vietnam
Procedia PDF Downloads 280
98 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa
Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini
Abstract:
Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen that is present in soil, water, and food. This microbe has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infections. Considering the casualties and property loss caused by P. aeruginosa, the development of a rapid and reliable technique for the detection of P. aeruginosa is crucial. The whole-cell aptasensor, an emerging biosensor that uses an aptamer as a capture probe to bind to the whole cell, has attracted much attention for food-borne pathogen detection due to its convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the ‘T₂-shortening’ effect of magnetic nanoparticles in NMR measurements. Briefly speaking, the transverse relaxation time (T₂) of neighboring water protons gets shortened when magnetic nanoparticles are clustered due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. Such shortening is related to both the state change (aggregation or dissociation) and the concentration change of magnetic nanoparticles and can be detected using NMR relaxometry or MRI scanners. In this work, two different sizes of magnetic nanoparticles, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were first separately immobilized with an anti-P. aeruginosa aptamer through 1-Ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry, to capture and enrich the P. aeruginosa cells. When incubated with the target, a ‘sandwich’ (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the binding of MN₄₀₀ to P. aeruginosa through aptamer recognition, as well as the conjugate aggregation of MN₁₀ on the surface of P. aeruginosa. Due to the different magnetic performance of MN₁₀ and MN₄₀₀ in the magnetic field, caused by their different saturation magnetization, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀ in the solution, can be quickly removed by magnetic separation, and as a result, only unreacted MN₁₀ remain in the solution. The remaining MN₁₀, which are superparamagnetic and stable in a low magnetic field, work as the signal readout for the T₂ measurement. Under the optimum conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor are demonstrated in detecting real food samples and validated by using plate counting methods. With only two steps and less than 2 hours needed for the detection procedure, this robust aptasensor can detect P. aeruginosa over a wide linear range from 3.1 × 10² cfu/mL to 3.1 × 10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by switching to suitable aptamers. Considering its excellent accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for the quick, direct and accurate determination of food-borne pathogens at the cell level.
Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time
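For context on the 'T₂-shortening' readout (a standard relaxivity relation, shown as a sketch rather than the calibration actually used in this work), the measured relaxation rate varies approximately linearly with the concentration of the superparamagnetic nanoparticles remaining in solution:
\frac{1}{T_2} = \frac{1}{T_{2,0}} + r_2\,C_{MN}
where T_{2,0} is the relaxation time of the particle-free solution, r_2 the transverse relaxivity, and C_{MN} the nanoparticle concentration; a higher bacterial load removes more MN₁₀ within the sandwich complexes, leaving a lower C_{MN} and hence a longer measured T₂.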
Procedia PDF Downloads 152
97 Molecular Migration in Polyvinyl Acetate Matrix: Impact of Compatibility, Number of Migrants and Stress on Surface and Internal Microstructure
Authors: O. Squillace, R. L. Thompson
Abstract:
Migration of small molecules to, and across, the surface of polymer matrices is a little-studied problem with important industrial applications. Tackifiers in adhesives, flavors in foods and binding agents in paints all present situations where the function of a product depends on the ability of small molecules to migrate through a polymer matrix to achieve the desired properties, such as softness and dispersion of fillers, and to deliver an effect that is felt (or tasted) on a surface. It has been shown that the chemical and molecular structure, surface free energies, phase behavior, close environment and compatibility of the system influence the migrants' motion. When differences in behavior are observed, such as whether segregation to the surface occurs or not, it is of crucial importance to identify and better understand the driving forces involved in the process of molecular migration. To this aim, experiment is meant to be allied with theory in order to deliver a validated theoretical and computational toolkit to describe and predict these phenomena. The systems chosen for this study aim to address the effect of polarity mismatch between the migrants and the polymer matrix, and that of a second migrant over the first one. As a non-polar resin polymer, polyvinyl acetate is used as the material to which more or less polar migrants (sorbitol, carvone, octanoic acid (OA), triacetin) are added. Through contact angle measurements, a surface excess is seen for sorbitol (polar) mixed with PVAc, as the surface energy is lowered compared to that of pure PVAc. This effect is increased upon the addition of carvone or triacetin (non-polar). Surface micro-structures are also evidenced by atomic force microscopy (AFM). Ion beam analysis (Nuclear Reaction Analysis), supplemented by neutron reflectometry, can accurately characterize the self-organization of surfactants, oligomers, and aromatic molecules in polymer films in order to relate the macroscopic behavior to the length scales that are amenable to simulation. The nuclear reaction analysis (NRA) data for 20% deuterated OA show evidence of a surface excess, which is enhanced after annealing. The addition of 10% triacetin, as a second migrant, results in the formation of an underlying layer enriched in triacetin below the surface excess of OA. The results show that molecules in polarity mismatch with the matrix tend to segregate to the surface, and this is favored by the addition of a second migrant of the same polarity as the matrix. As a first step, studies have been restricted to model supported films under static conditions; we also wish to address the more challenging conditions of materials under controlled stress or strain. To achieve this, a simple rig and PDMS cell have been designed to stretch the material to a defined strain and to probe these mechanical effects by ion beam analysis and atomic force microscopy. This will be a significant step towards exploring the influence of extensional strain on surface segregation and flavor release in cross-linked rubbers.
Keywords: polymers, surface segregation, thin films, molecular migration
Procedia PDF Downloads 132
96 Numerical Prediction of Width Crack of Concrete Dapped-End Beams
Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo
Abstract:
Several methods have been utilized to study the prediction of cracking of concrete structures under loading. Finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of the crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends; it has been observed that cracks that exceed the allowable widths are unacceptable in an environment that is aggressive for reinforcing steel. For simulating the crack width, the discrete crack approach was considered by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two cases of dapped ends were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers a reinforcement based on hangers as well as on vertical and horizontal rings; in the second case, 50% of the vertical stirrups connecting the dapped end to the main part of the beam were replaced by an equivalent area (vertically projected) of diagonal bars. The loading protocol consisted of applying symmetrical loading to reach the service load. The models were built using the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements SOLID65, capable of cracking in tension and crushing in compression. The Drucker-Prager yield surface was used to include the plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods such as the nodal release technique, adopting softening relationships between the tractions and the separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces. This technique is called CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded (initial contact) behavior. The mode-I-dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the opening of the crack was taken into consideration according to the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed at the re-entrant corner of the crack. To validate the proposed approach, the results obtained with the previous procedure were compared with experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained; the numerical models also allowed the load-crack width curves to be obtained. In these two cases, the proposed model confirms the capability of predicting the maximum crack width, with an error of ± 30%. Finally, the orientation of the crack is fundamental for the prediction of the crack width. The results regarding the crack width can be considered good from the practical point of view. Favorable results were also obtained for the load-displacement curve of the test and the location of the crack.
Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis
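For clarity, the bilinear traction-separation (cohesive) law referred to above can be sketched in its generic mode-I form; the specific parameters are those listed in the abstract (maximum normal contact stress and contact gap at complete debonding):
T(\delta) = K\,\delta \ \text{for}\ \delta \le \delta_0, \qquad T(\delta) = \sigma_{max}\,\frac{\delta_f - \delta}{\delta_f - \delta_0} \ \text{for}\ \delta_0 < \delta \le \delta_f, \qquad T(\delta) = 0 \ \text{for}\ \delta > \delta_f, \qquad G_c = \tfrac{1}{2}\,\sigma_{max}\,\delta_f
where \sigma_{max} is the maximum normal contact stress, \delta_f the contact gap at the completion of debonding, K = \sigma_{max}/\delta_0 the initial contact stiffness, and G_c the critical fracture energy released when the interface fully separates.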
Procedia PDF Downloads 167
95 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence
Authors: Gus Calderon, Richard McCreight, Tammy Schwartz
Abstract:
Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing uses object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community are divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch’s defensible space maps were combined with Black Swan’s patented approach using 39 other risk characteristics into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk
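As an example of the spectral vegetation index maps mentioned above (the specific index used by FireWatch is not named, so the normalized difference vegetation index is shown only as a common choice for mapping vegetation condition), NDVI is computed per pixel from the near-infrared and red band reflectances:
NDVI = \frac{\rho_{NIR} - \rho_{Red}}{\rho_{NIR} + \rho_{Red}}
with values near +1 indicating dense, healthy vegetation and values near zero indicating cured fuels, bare ground, or non-vegetated surfaces.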
Procedia PDF Downloads 108
94 A Microwave Heating Model for Endothermic Reaction in the Cement Industry
Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
Microwave technology has been gaining importance in contributing to decarbonization processes in high energy demand industries. Despite the several numerical models presented in the literature, a proper Verification and Validation exercise is still lacking. This is important and required to evaluate the physical process model accuracy and adequacy. Another issue addresses impedance matching, which is an important mechanism used in microwave experiments to increase electromagnetic efficiency. Such a mechanism is not available in current computational tools, thus requiring an external numerical procedure. A numerical model was implemented to study the continuous processing of limestone with microwave heating. This process requires the material to be heated until a certain temperature that will prompt a highly endothermic reaction. Both a 2D and a 3D model were built in COMSOL Multiphysics to solve the two-way coupling between the Maxwell and energy equations, along with the coupling between both heat transfer phenomena and the limestone endothermic reaction. The 2D model was used to study and evaluate the required numerical procedure, being also a benchmark test that allows other authors to implement impedance matching procedures. To achieve this goal, a controller built in MATLAB was used to continuously match the cavity impedance and predict the required energy for the system, thus successfully avoiding energy inefficiencies. The 3D model reproduces realistic results and therefore supports the main conclusions of this work. Limestone was modeled as a continuous flow under the transport of concentrated species, whose material and kinetic properties were taken from the literature. Verification and Validation of the coupled model were carried out separately from those of the chemical kinetic model. The chemical kinetic model was found to correctly describe the chosen kinetic equation by comparing numerical results with experimental data. A solution verification was made for the electromagnetic interface, where second-order and fourth-order accurate schemes were found for linear and quadratic elements, respectively, with a numerical uncertainty lower than 0.03%. Regarding the coupled model, it was demonstrated that the numerical error would diverge for the heat transfer interface with the mapped mesh. Results showed numerical stability for the triangular mesh, and the numerical uncertainty was less than 0.1%. This study evaluated the influence of limestone velocity, heat transfer, and load on thermal decomposition and overall process efficiency. The velocity and heat transfer coefficient were studied with the 2D model, while different loads of material were studied with the 3D model. Both models proved to be highly unstable when solving non-linear temperature distributions. High-velocity flows exhibited a propensity for thermal runaways, and the thermal efficiency showed a tendency to stabilize for the higher velocities and higher filling ratios. Microwave efficiency exhibited an optimal velocity for each heat transfer coefficient, pointing out that electromagnetic efficiency is a consequence of energy distribution uniformity. The 3D results indicated an inefficient development of the electric field for low filling ratios. Thermal efficiencies higher than 90% were found for the higher loads, and microwave efficiencies of up to 75% were accomplished. The 80% fill ratio was demonstrated to be the optimal load, with an associated global efficiency of 70%.
Keywords: multiphysics modeling, microwave heating, verification and validation, endothermic reactions modeling, impedance matching, limestone continuous processing
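For reference, the volumetric heating term that couples the electromagnetic and energy equations in such models is commonly written with the standard dielectric-loss expression below; this is a sketch of the coupling, not necessarily the exact source term implemented in COMSOL here:
Q = \frac{1}{2}\,\omega\,\varepsilon_0\,\varepsilon_r''\,|\mathbf{E}|^2
where \omega is the angular frequency of the microwave field, \varepsilon_r'' the dielectric loss factor of the limestone load, and \mathbf{E} the local electric field; Q then enters the energy equation as the heat source that must supply the endothermic decomposition.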
Procedia PDF Downloads 140
93 Luminescent Properties of Plastic Scintillator with Large Area Photonic Crystal Prepared by a Combination of Nanoimprint Lithography and Atomic Layer Deposition
Authors: Jinlu Ruan, Liang Chen, Bo Liu, Xiaoping Ouyang, Zhichao Zhu, Zhongbing Zhang, Shiyi He, Mengxuan Xu
Abstract:
Plastic scintillators play an important role in the measurement of a mixed neutron/gamma pulsed radiation, neutron radiography and pulse shape discrimination technology. In some research, these luminescent properties are necessary that photons produced by the interactions between a plastic scintillator and radiations can be detected as much as possible by the photoelectric detectors and more photons can be emitted from the scintillators along a specific direction where detectors are located. Unfortunately, a majority of these photons produced are trapped in the plastic scintillators due to the total internal reflection (TIR), because there is a significant light-trapping effect when the incident angle of internal scintillation light is larger than the critical angle. Some of these photons trapped in the scintillator may be absorbed by the scintillator itself and the others are emitted from the edges of the scintillator. This makes the light extraction of plastic scintillators very low. Moreover, only a small portion of the photons emitted from the scintillator easily can be detected by detectors effectively, because the distribution of the emission directions of this portion of photons exhibits approximate Lambertian angular profile following a cosine emission law. Therefore, enhancing the light extraction efficiency and adjusting the emission angular profile become the keys for improving the number of photons detected by the detectors. In recent years, photonic crystal structures have been covered on inorganic scintillators to enhance the light extraction efficiency and adjust the angular profile of scintillation light successfully. However, that, preparation methods of photonic crystals will deteriorate performance of plastic scintillators and even destroy the plastic scintillators, makes the investigation on preparation methods of photonic crystals for plastic scintillators and luminescent properties of plastic scintillators with photonic crystal structures inadequate. Although we have successfully made photonic crystal structures covered on the surface of plastic scintillators by a modified self-assembly technique and achieved a great enhance of light extraction efficiency without evident angular-dependence for the angular profile of scintillation light, the preparation of photonic crystal structures with large area (the diameter is larger than 6cm) and perfect periodic structure is still difficult. In this paper, large area photonic crystals on the surface of scintillators were prepared by nanoimprint lithography firstly, and then a conformal layer with material of high refractive index on the surface of photonic crystal by atomic layer deposition technique in order to enhance the stability of photonic crystal structures and increase the number of leaky modes for improving the light extraction efficiency. The luminescent properties of the plastic scintillator with photonic crystals prepared by the mentioned method are compared with those of the plastic scintillator without photonic crystal. The results indicate that the number of photons detected by detectors is increased by the enhanced light extraction efficiency and the angular profile of scintillation light exhibits evident angular-dependence for the scintillator with photonic crystals. 
The described preparation of photonic crystals is beneficial to scintillation detection applications and lays an important technical foundation for plastic scintillators to meet special requirements in different application contexts. Keywords: angular profile, atomic layer deposition, light extraction efficiency, plastic scintillator, photonic crystal
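As a rough illustration of the light-trapping described above, the sketch below estimates the escape-cone fraction for a single flat scintillator face from the critical angle for total internal reflection. The refractive index is an assumed, typical value for a polyvinyltoluene-based plastic scintillator rather than a figure from the paper, and Fresnel losses and self-absorption are ignored.

```python
import math

def escape_fraction(n_scint: float, n_out: float = 1.0) -> float:
    """Fraction of isotropically emitted photons lying inside the escape cone
    of a single flat face (ignores Fresnel losses and self-absorption)."""
    theta_c = math.asin(n_out / n_scint)      # critical angle for TIR
    return 0.5 * (1.0 - math.cos(theta_c))    # solid-angle fraction of one cone

# Assumed refractive index for a polyvinyltoluene-based plastic scintillator.
n_pvt = 1.58
print(f"critical angle: {math.degrees(math.asin(1.0 / n_pvt)):.1f} deg")
print(f"escape fraction per face: {escape_fraction(n_pvt):.3f}")   # ~0.11
```

With n around 1.58 only about 11% of the photons fall inside the escape cone of a given face, which is why out-coupling the otherwise trapped (leaky) modes with a photonic crystal can noticeably raise the number of photons reaching the detector.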
Procedia PDF Downloads 200
92 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing
Authors: Tolulope Aremu
Abstract:
The key process steps used to produce liquid detergent products (formulation, mixing, filling, and packaging) can introduce defects that compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing defect characterization in the liquid detergent manufacturing process using machine learning algorithms. Performance testing of various machine learning models was carried out, namely Support Vector Machine, Decision Trees, Random Forest, and Convolutional Neural Network, for the detection and classification of defects such as wrong viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, to greatly improve detection accuracy while minimizing false positives. The study draws on a rich dataset of defect types and production parameters consisting of more than 100,000 samples, combining real-time sensor data, imaging technologies, and historic production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. The CNNs, for instance, reached 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, after fine-tuning with real-time imaging data, which also reduced false positives by about 30%. The optimized SVM model achieved 94% accuracy in detecting formulation defects such as viscosity and color variation. These performance metrics correspond to a considerable leap in detection accuracy compared to the roughly 80% level achieved so far by rule-based systems. Moreover, the optimized models accelerate defect characterization, bringing detection time below 15 seconds through real-time processing of data, compared with an average of 3 minutes for manual inspection. This time reduction is combined with a 25% reduction in production downtime owing to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring supports predictive maintenance and corrective measures for a 20% improvement in overall production efficiency. The optimization of machine learning algorithms for defect characterization therefore gives liquid detergent manufacturers scalability, efficiency, and improved operational performance at higher levels of product quality. In general, this approach could be applied in several industries within the fast-moving consumer goods sector, leading to improved quality control processes. Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods
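A minimal sketch of the kind of hyperparameter tuning described above, written with scikit-learn; the feature matrix, the four defect classes and the parameter grid are placeholders for illustration, not the study's dataset or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder feature matrix standing in for sensor/imaging/production features
# and four hypothetical defect classes; this is not the study's dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))
y = rng.integers(0, 4, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Hyperparameter tuning of the kind described in the abstract.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    cv=5, scoring="f1_macro", n_jobs=-1,
)
grid.fit(X_tr, y_tr)
print(grid.best_params_)
print(classification_report(y_te, grid.predict(X_te)))
```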
Procedia PDF Downloads 20
91 Epidemiological Patterns of Pediatric Fever of Unknown Origin
Authors: Arup Dutta, Badrul Alam, Sayed M. Wazed, Taslima Newaz, Srobonti Dutta
Abstract:
Background: In today's world, with modern science and contemporary technology, many diseases can be quickly identified or ruled out, but fever of unknown origin (FUO) in children still presents diagnostic difficulties in clinical settings. Any fever that reaches 38 °C and lasts for more than seven days without a known cause is currently classified as a fever of unknown origin. Despite tremendous progress in the medical sector, FUO persists as a major health issue and a major contributor to morbidity and mortality, particularly in children, and its spectrum is sometimes unpredictable. The etiology is influenced by geographic location, age, socioeconomic level, frequency of antibiotic resistance, and genetic vulnerability. Since there are currently no established diagnostic algorithms, clinicians must evaluate each patient individually with extreme caution. A persistent fever poses difficulties for both the patient and the doctor. This prospective observational study was carried out in a Bangladeshi tertiary care hospital from June 2018 to May 2019 with the goal of identifying the epidemiological patterns of fever of unknown origin in pediatric patients. Methods: It was a hospital-based prospective observational study carried out on 106 children (between 2 months and 12 years of age) with prolonged fever of >38.0 °C lasting for more than 7 days without a clear source. Children with additional chronic diseases or known immunodeficiency disorders were excluded. Clinical practices that helped determine the definitive etiology were assessed. Initial testing included a complete blood count, a routine urine examination, PBF, a chest X-ray, CRP measurement, blood cultures, serology, and additional pertinent investigations. The analysis focused mostly on the etiological results. All study data were analyzed with the standard software package SPSS 21. Findings: A total of 106 patients identified as having FUO were assessed; over half (57.5%) were female and the largest proportion (40.6%) fell within the 1 to 3-year age range. The study categorized the etiological outcomes into five groups: infections, malignancies, connective tissue conditions, miscellaneous, and undiagnosed. Infections were found to be the main cause, accounting for 44.3% of cases, followed by undiagnosed cases at 31.1%, malignancies at 10.4%, miscellaneous causes at 8.5%, and connective tissue disorders at 4.7%. Hepato-splenomegaly was seen in patients with enteric fever, malaria, acute lymphoid leukemia, lymphoma, and hepatic abscesses, either alone or in combination with other conditions. About 53% of the undiagnosed patients also had hepato-splenomegaly. Conclusion: Infections are the primary cause of pyrexia of unknown origin in children, with undiagnosed cases being the second most common category. An incremental approach is beneficial in the diagnostic process. Non-invasive examinations are used to diagnose infections and connective tissue disorders, while invasive investigations are used to diagnose malignancies and other conditions. According to this study, the prevalence of undiagnosed disease remains considerable, so thorough history-taking and physical examination are necessary in order to reach a precise diagnosis. Keywords: children, diagnostic challenges, fever of unknown origin, pediatric fever, undiagnosed diseases
Procedia PDF Downloads 27
90 Quantitative Texture Analysis of Shoulder Sonography for Rotator Cuff Lesion Classification
Authors: Chung-Ming Lo, Chung-Chien Lee
Abstract:
In many countries, the lifetime prevalence of shoulder pain is up to 70%. In America, the health care system spends 7 billion per year on health problems related to shoulder pain. With respect to its origin, up to 70% of shoulder pain is attributed to rotator cuff lesions. This study proposed a computer-aided diagnosis (CAD) system to assist radiologists in classifying rotator cuff lesions with less operator dependence. Quantitative features were extracted from shoulder ultrasound images acquired using an ALOKA alpha-6 US scanner (Hitachi-Aloka Medical, Tokyo, Japan) with a linear array probe (scan width: 36 mm) ranging from 5 to 13 MHz. During examination, the patients were placed in a standard sitting position and the regular routine was followed. After acquisition, the shoulder US images were exported from the scanner and stored as 8-bit images with pixel values ranging from 0 to 255. Based on the sonographic appearance, the boundary of each lesion was delineated by a physician to indicate the specific pattern for analysis. The three lesion categories for classification comprised 20 cases of tendon inflammation, 18 cases of calcific tendonitis, and 18 cases of supraspinatus tear. For each lesion, second-order statistics were quantified in the feature extraction. The second-order statistics are texture features describing the correlations between adjacent pixels in a lesion. Because echogenicity patterns are expressed in grey scale, grey-level co-occurrence matrices with four angles of adjacent pixels were used. The texture metrics included the mean and standard deviation of energy, entropy, correlation, inverse difference moment, inertia, cluster shade, cluster prominence, and Haralick correlation. The quantitative features were then combined in a multinomial logistic regression classifier to generate a prediction model of rotator cuff lesions. Multinomial logistic regression is widely used for classification into more than two categories, such as the three lesion types in this study. In the classifier, backward elimination was used to select the most relevant feature subset, chosen from the trained classifier with the lowest error rate. Leave-one-out cross-validation was used to evaluate the performance of the classifier: each case was left out in turn and used to test the model trained on the remaining cases. Performance of the proposed CAD system, evaluated against the physician's assessment, was expressed as accuracy. As a result, the proposed system achieved an accuracy of 86%. A CAD system based on statistical texture features interpreting echogenicity values in shoulder musculoskeletal ultrasound was thus established to generate a prediction model for rotator cuff lesions. Clinically, it is difficult to distinguish some kinds of rotator cuff lesions, especially partial-thickness tears of the rotator cuff. According to the available literature, shoulder orthopaedic surgeons and musculoskeletal radiologists report greater diagnostic test accuracy than general radiologists or ultrasonographers. Consequently, the proposed CAD system, which was developed according to the shoulder orthopaedic surgeon's assessments, can provide reliable suggestions to general radiologists or ultrasonographers.
More quantitative features related to the specific patterns of different lesion types will be investigated in further studies to improve the prediction. Keywords: shoulder ultrasound, rotator cuff lesions, texture, computer-aided diagnosis
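The following sketch outlines the texture-feature pipeline described above: grey-level co-occurrence matrices at four angles, a multinomial logistic regression classifier and leave-one-out cross-validation, using scikit-image and scikit-learn. The regions of interest and class labels are synthetic placeholders, only a subset of the listed texture metrics is computed, and the backward-elimination step is omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

def glcm_features(roi: np.ndarray) -> np.ndarray:
    """Second-order texture features from a delineated 8-bit lesion ROI."""
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["energy", "homogeneity", "correlation", "contrast", "dissimilarity"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Synthetic ROIs standing in for the 56 delineated lesions (3 classes).
rng = np.random.default_rng(1)
rois = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(56)]
X = np.array([glcm_features(r) for r in rois])
y = np.array([0] * 20 + [1] * 18 + [2] * 18)  # inflammation / calcific / tear

clf = LogisticRegression(max_iter=2000)  # lbfgs fits a multinomial model here
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.2f}")
```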
Procedia PDF Downloads 284
89 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms can support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability to distinguish between controls and patients using the mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and the number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-timing correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with the number of components set to 21. Fifteen components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient was found to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor network I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis. Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
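A compact sketch of the two feature-selection routes described above (Gini importance from a random forest, recursive feature elimination for the SVM) on a 37-subject by 15-network table; the data are randomly generated placeholders, and a linear-kernel SVM is used for the RFE ranking before refitting an RBF-SVM, which is an assumption rather than the authors' exact procedure.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder table: 37 subjects x 15 mean network signals (not the study data).
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.normal(size=(37, 15)),
                 columns=[f"network_{i + 1}" for i in range(15)])
y = np.array([0] * 19 + [1] * 18)  # 0 = control, 1 = early MS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                           stratify=y, random_state=0)

# Random forest: rank features by Gini importance.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rf_rank = sorted(zip(X.columns, rf.feature_importances_),
                 key=lambda t: t[1], reverse=True)

# SVM: recursive feature elimination with a linear kernel for the ranking,
# then an RBF-SVM refit on the top-ranked feature.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
top = X.columns[rfe.ranking_ == 1]

svm_top = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr[top], y_tr)
print("RF top feature:", rf_rank[0][0])
print("RFE top feature:", list(top))
print("RBF-SVM test accuracy on top feature:", svm_top.score(X_te[top], y_te))
```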
Procedia PDF Downloads 240
88 Hypothalamic Para-Ventricular and Supra-Optic Nucleus Histo-Morphological Alterations in the Streptozotocin-Diabetic Gerbils (Gerbillus Gerbillus)
Authors: Soumia Hammadi, Imane Nouacer, Lamine Hamida, Younes A. Hammadi, Rachid Chaibi
Abstract:
Aims and objective: In the present work, we investigated the impact of both acute and chronic diabetes mellitus induced by streptozotocin (STZ) on the hypothalamus of the small gerbil (Gerbillus gerbillus). For this purpose, we studied the histologic structure of the gerbil's hypothalamic supraoptic (NSO) and paraventricular (NPV) nuclei at two distinct time points: two days and 30 days after diabetes onset. Methods: We conducted our investigation using 19 adult male gerbils weighing 25 to 28 g, divided into three groups as follows. Group I: control gerbils (n=6) received an intraperitoneal injection of citrate buffer. Group II: STZ-diabetic gerbils (n=8) received a single intraperitoneal injection of STZ at a dose of 165 mg/kg of body weight. Diabetes onset (D0) was defined as the first hyperglycemia reading exceeding 2.5 g/L. This group was further divided into two subgroups: Group II-1, gerbils at the acute stage of diabetes (n=8), sacrificed 2 days after diabetes onset; and Group II-2, gerbils at the chronic stage of diabetes (n=7), sacrificed 30 days after diabetes onset. Two and 30 days after diabetes onset, blood was drawn from the retro-orbital sinus into EDTA tubes. After centrifugation at -4°C, plasma was frozen at -80°C for later measurement of cortisol, ACTH, and insulin. Afterward, the animals were decapitated; their brains were removed, weighed, fixed in aqueous Bouin's solution, processed, and stained with toluidine blue for histo-stereological analysis. Comparisons were made with control gerbils treated with citrate buffer. Results: Compared to control gerbils, at 2 days after diabetes onset the neuronal somata of the paraventricular (NPV) and supraoptic (NSO) nuclei displayed numerous vacuoles of various sizes; we also observed neuronal juxtaposition, and several unidentifiable vacuolated profiles were seen in the neuropil. At the same time, we noted the presence of shrunken and condensed nuclei, which appeared to affect the parvocellular neurons (NPV); this leads us to suggest the presence of an apoptotic process in the early stage of diabetes. At 30 days of diabetes mellitus, the NPV showed a few neurons with a distant appearance; in addition, the magnocellular neurons in both the NPV and NSO were hypertrophied, with a euchromatin-rich nucleus, a well-defined nucleolus, and a granular cytoplasm. Despite the neuronal degeneration at this stage, ACTH unexpectedly remained at a significantly higher level compared to the early stage of diabetes mellitus and to control gerbils. Conclusion: The results suggest that the induction of diabetes mellitus with STZ in the small gerbil leads to alterations in the structure and morphology of the hypothalamus and to hyper-secretion of ACTH and cortisol, possibly indicating hyperactivity of the hypothalamo-pituitary-adrenal (HPA) axis during both the early and later stages of the disease. Subsequent quantitative evaluation of CRH, immunohistochemical evaluation of apoptosis, and oxidative stress assessment could corroborate these results. Keywords: diabetes type 1, streptozotocin, small gerbil, hypothalamus, paraventricular nucleus, supraoptic nucleus
Procedia PDF Downloads 74
87 Creation of a Test Machine for the Scientific Investigation of Chain Shot
Authors: Mark McGuire, Eric Shannon, John Parmigiani
Abstract:
Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can produce a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It has speeds similar to a bullet but typically has greater mass and is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot to occur under carefully controlled conditions and to accurately measure the response. Worldwide, few such test machines exist. Those that do focus on validating the ability of barriers to withstand a chain shot impact rather than on obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. The machine can test all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, and it accurately measures the corresponding key technical parameters. The test machine was constructed inside a standard shipping container, which provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and can be located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated per ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. Measurement output shows the fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and consequently improved products and greater harvester operator safety. Keywords: chain shot, safety, testing, timber harvesters
Procedia PDF Downloads 152
86 Design of Evaluation for Ehealth Intervention: A Participatory Study in Italy, Israel, Spain and Sweden
Authors: Monika Jurkeviciute, Amia Enam, Johanna Torres Bonilla, Henrik Eriksson
Abstract:
Introduction: Many evaluations of eHealth interventions conclude that the evidence for improved clinical outcomes is limited, especially when the intervention is short, such as one year. Often, evaluation design does not address the feasibility of achieving clinical outcomes. Evaluations are designed to reflect upon clinical goals of intervention without utilizing the opportunity to illuminate effects on organizations and cost. A comprehensive design of evaluation can better support decision-making regarding the effectiveness and potential transferability of eHealth. Hence, the purpose of this paper is to present a feasible and comprehensive design of evaluation for eHealth intervention, including the design process in different contexts. Methodology: The situation of limited feasibility of clinical outcomes was foreseen in the European Union funded project called “DECI” (“Digital Environment for Cognitive Inclusion”) that is run under the “Horizon 2020” program with an aim to define and test a digital environment platform within corresponding care models that help elderly people live independently. A complex intervention of eHealth implementation into elaborate care models in four different countries was planned for one year. To design the evaluation, a participative approach was undertaken using Pettigrew’s lens of change and transformations, including context, process, and content. Through a series of workshops, observations, interviews, and document analysis, as well as a review of scientific literature, a comprehensive design of evaluation was created. Findings: The findings indicate that in order to get evidence on clinical outcomes, eHealth interventions should last longer than one year. The content of the comprehensive evaluation design includes a collection of qualitative and quantitative methods for data gathering which illuminates non-medical aspects. Furthermore, it contains communication arrangements to discuss the results and continuously improve the evaluation design, as well as procedures for monitoring and improving the data collection during the intervention. The process of the comprehensive evaluation design consists of four stages: (1) analysis of a current state in different contexts, including measurement systems, expectations and profiles of stakeholders, organizational ambitions to change due to eHealth integration, and the organizational capacity to collect data for evaluation; (2) workshop with project partners to discuss the as-is situation in relation to the project goals; (3) development of general and customized sets of relevant performance measures, questionnaires and interview questions; (4) setting up procedures and monitoring systems for the interventions. Lastly, strategies are presented on how challenges can be handled during the design process of evaluation in four different countries. The evaluation design needs to consider contextual factors such as project limitations, and differences between pilot sites in terms of eHealth solutions, patient groups, care models, national and organizational cultures and settings. This implies a need for the flexible approach to evaluation design to enable judgment over the effectiveness and potential for adoption and transferability of eHealth. In summary, this paper provides learning opportunities for future evaluation designs of eHealth interventions in different national and organizational settings.Keywords: ehealth, elderly, evaluation, intervention, multi-cultural
Procedia PDF Downloads 324
85 Development of PCL/Chitosan Core-Shell Electrospun Structures
Authors: Hilal T. Sasmazel, Seda Surucu
Abstract:
Skin tissue engineering is a promising field for the treatment of skin defects using scaffolds. This approach involves the use of living cells and biomaterials to restore, maintain, or regenerate tissues and organs in the body by providing (i) a larger surface area for cell attachment, (ii) proper porosity for cell colonization and cell-to-cell interaction, and (iii) three-dimensionality at the macroscopic scale. Recent studies in this area mainly focus on the fabrication of scaffolds that closely mimic the natural extracellular matrix (ECM) to create a tissue-specific, niche-like environment at the subcellular scale. Scaffolds designed as ECM-like architectures incorporate into the host with minimal scarring/pain and facilitate angiogenesis. This study combines the synthetic polymer PCL and the natural polymer chitosan to form 3D PCL/chitosan core-shell structures for skin tissue engineering applications. Among the polymers used in tissue engineering, the natural polymer chitosan and the synthetic polymer poly(ε-caprolactone) (PCL) are widely preferred in the literature. Chitosan has long attracted researchers because of its superior biocompatibility and structural resemblance to the glycosaminoglycans of bone tissue. However, its low mechanical flexibility and limited biodegradability reveal the necessity of using this polymer in a composite structure. On the other hand, PCL is a versatile polymer due to its low melting point (60°C), ease of processability, degradability by non-enzymatic processes (hydrolysis) and good mechanical properties. Nevertheless, PCL also has several disadvantages, such as its hydrophobic structure, limited bio-interaction and susceptibility to bacterial biodegradation. Therefore, it is crucial to use both of these polymers together as a hybrid material in order to overcome the disadvantages of each and combine their advantages. The scaffolds here were fabricated using the electrospinning technique, and the samples were characterized by contact angle (CA) measurements, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS). Additionally, gas permeability tests, mechanical tests, thickness measurements and PBS absorption and shrinkage tests were performed for all types of scaffolds (PCL, chitosan and PCL/chitosan core-shell). Using the ImageJ launcher software program (USA), the average inter-fiber diameter values were calculated from the SEM photographs as 0.717±0.198 µm for PCL, 0.660±0.070 µm for chitosan and 0.412±0.339 µm for the PCL/chitosan core-shell structures. Additionally, the average inter-fiber pore size values exhibited decreases of 66.91% and 61.90% for the PCL and chitosan structures, respectively, compared to the PCL/chitosan core-shell structures. TEM images proved that homogeneous and continuous bead-free core-shell fibers were obtained. XPS analysis of the PCL/chitosan core-shell structures exhibited the characteristic peaks of the PCL and chitosan polymers. The measured average gas permeability of the produced PCL/chitosan core-shell structure was determined to be 2315±3.4 g·m-2·day-1. In future work, cell-material interactions of the developed PCL/chitosan core-shell structures will be investigated with the L929 ATCC CCL-1 mouse fibroblast cell line.
A standard MTT assay and microscopic imaging methods will be used to investigate the cell attachment, proliferation and growth capacities of the developed materials. Keywords: chitosan, coaxial electrospinning, core-shell, PCL, tissue scaffold
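A small sketch of the post-processing arithmetic behind the figures reported above: mean and standard deviation of diameters measured in ImageJ, and the percentage decrease used for the pore-size comparison. All numbers are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical per-fiber diameters (in µm) measured from SEM images in ImageJ;
# the values are placeholders, not the study's raw measurements.
diameters = np.array([0.35, 0.42, 0.28, 0.55, 0.46])
print(f"mean ± SD: {diameters.mean():.3f} ± {diameters.std(ddof=1):.3f} µm")

def percent_decrease(reference: float, value: float) -> float:
    """Decrease of `value` relative to `reference`, as used for the pore sizes."""
    return (reference - value) / reference * 100.0

# Illustrative pore-size means only; shows how a figure such as the reported
# 66.91% decrease would be obtained.
print(f"decrease: {percent_decrease(3.0, 1.0):.2f} %")
```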
Procedia PDF Downloads 481
84 A Quasi-Systematic Review on Effectiveness of Social and Cultural Sustainability Practices in Built Environment
Authors: Asif Ali, Daud Salim Faruquie
Abstract:
With the advancement of knowledge about the utility and impact of sustainability, its feasibility has been explored in different walks of life. Scientists, however, have established their knowledge in four areas, viz. environmental, economic, social and cultural, popularly termed the four pillars of sustainability. Aspects of environmental and economic sustainability have been rigorously researched and practiced, and a huge volume of strong evidence of effectiveness has been found for these two sub-areas. For the social and cultural aspects of sustainability, dependable evidence of effectiveness is still to be established, as researchers and practitioners are developing and experimenting with methods across the globe. Therefore, the present research aimed to identify globally used practices of social and cultural sustainability and, through evidence synthesis, assess their outcomes to determine the effectiveness of those practices. A PICO format steered the methodology, which included all populations; popular sustainability practices including walkability/cycle tracks, social/recreational spaces, privacy, health and human services, and barrier-free built environments; comparators including 'before' and 'after', 'with' and 'without', 'more' and 'less'; and outcomes including social well-being, cultural co-existence, quality of life, ethics and morality, social capital, sense of place, education, health, recreation and leisure, and holistic development. The literature search covered major electronic databases, search websites, organizational resources, the directory of open access journals and subscribed journals. Grey literature, however, was not included. Inclusion criteria filtered studies on the basis of research designs such as total randomization, quasi-randomization, cluster randomization, observational or single studies, and certain types of analysis. Studies with combined outcomes were considered, but studies focusing only on environmental and/or economic outcomes were rejected. Data extraction, critical appraisal and evidence synthesis were carried out using customized tabulation, a reference manager and the CASP tool. A partial meta-analysis was carried out, with calculation of pooled effects and forest plotting. The 13 studies finally included in the synthesis described the impact of the targeted practices on health, behavioural and social dimensions. Objectivity in the measurement of health outcomes facilitated quantitative synthesis of studies, which highlighted the impact of sustainability methods on physical activity, body mass index, perinatal outcomes and child health. Studies synthesized qualitatively (and also quantitatively) showed outcomes such as routines, family relations, citizenship, trust in relationships, social inclusion, neighbourhood social capital, wellbeing, habitability and family social processes. The synthesized evidence indicates slight effectiveness and efficacy of social and cultural sustainability practices on the targeted outcomes. Further synthesis revealed that these results are due to weak research designs and disintegrated implementations. If architects and other practitioners deliver their interventions in collaboration with research bodies and policy makers, a stronger evidence base in this area could be generated. Keywords: built environment, cultural sustainability, social sustainability, sustainable architecture
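As an illustration of the pooled-effect calculation and the forest-plot basis mentioned above, the sketch below applies fixed-effect inverse-variance pooling to a handful of hypothetical effect sizes; the choice of method and the numbers are assumptions for demonstration, not the review's extracted data.

```python
import numpy as np

# Hypothetical effect sizes (standardized mean differences) and standard errors
# for a handful of included studies; placeholders, not the review's data.
effects = np.array([0.32, 0.10, 0.45, 0.05, 0.22])
se = np.array([0.15, 0.20, 0.18, 0.12, 0.25])

# Fixed-effect inverse-variance pooling, the usual basis of a forest plot.
w = 1.0 / se ** 2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```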
Procedia PDF Downloads 401
83 Single Stage Holistic Interventions: The Impact on Well-Being
Authors: L. Matthewman, J. Nowlan
Abstract:
Background: Holistic or integrative psychology emphasizes the interdependence of physiological, spiritual and psychological dynamics. Studying 'wholeness and well-being' from a systems perspective combines innovative psychological science interventions with Eastern-oriented healing wisdoms and therapies. The literature surrounding holistic/integrative psychology focuses on multi-stage interventions that attempt to enhance the mind-body experiences of well-being for participants. This study proposes a new single-stage model as an intervention for UG/PG students, time-constrained workplace employees and managers/leaders for improved well-being and life enhancement. The main research objective was to investigate participants' experiences of holistic and mindfulness interventions and their impact on emotional well-being. The main research question asked was whether single-stage holistic interventions could impact psychological well-being. This is of consequence because many people report that a reason for not taking part in mind-body or wellness programmes is that they believe they do not have sufficient time to engage in such pursuits. Experimental approach: The study employed a mixed-methods pre-test/post-test research design. Data were analyzed using descriptive statistics and interpretative phenomenological analysis. Purposive sampling methods were employed. An adapted mindfulness measurement questionnaire (MAAS) was administered to 20 volunteer final-year UG student participants prior to the single-stage intervention and following the intervention. A further post-test longitudinal follow-up took place one week later. Intervention: The single-stage model intervention consisted of a half-hour session of mindfulness, yoga stretches and head and neck massage in the following sequence: mindful awareness of the breath, yoga stretches 1, mindfulness of the body, head and neck massage, mindfulness of sounds, yoga stretches 2, finishing with pure-awareness mindfulness. Results: The findings on the pre-test indicated key themes concerning 'being largely unaware of feelings', 'overwhelmed with final year exams', 'juggling other priorities', 'not feeling in control', 'stress' and 'negative emotional display episodes'. Themes indicated on the post-test included 'more aware of self', 'in more control', 'immediately more alive' and 'just happier' compared to the pre-test. Themes from the second post-test were similar to those of the first post-test but scored lower in intensity. Interestingly, the majority of participants reported that they would now seek other similar interventions in the future and would be likely to engage with a multi-stage intervention on a longer-term basis. Overall, participants reported increased psychological well-being after the single-stage intervention. Conclusion: A single-stage, one-off intervention model can be effective in supporting the well-being of final-year UG students. There is little indication to suggest that this would not be generalizable to others in different areas of life and business. However, the findings must be interpreted with caution due to the low participant numbers. Implications: Single-stage, one-off interventions can be used to enhance the lives of people who might not otherwise sign up for a longer multi-stage intervention. In addition, single-stage interventions can be utilized to help participants progress onto longer, multi-stage interventions.
Finally, further research into single-stage well-being interventions is encouraged. Keywords: holistic/integrative psychology, mindfulness, well-being, yoga
Procedia PDF Downloads 353
82 Impact of Anthropogenic Stresses on Plankton Biodiversity in Indian Sundarban Megadelta: An Approach towards Ecosystem Conservation and Sustainability
Authors: Dibyendu Rakshit, Santosh K. Sarkar
Abstract:
The study presents a comprehensive account of large-scale changes in plankton community structure in relation to water quality characteristics under anthropogenic stresses, mainly those associated with the Annual Gangasagar Festival (AGF) at the southern tip of Sagar Island of the Indian Sundarban wetland, over a 3-year period (2012-2014; n=36). This prograding, vulnerable and tide-dominated megadelta has formed in the estuarine phase of the Hooghly Estuary and supports the largest continuous tract of luxuriant mangrove forest, enriched with a rich native flora and fauna. The sampling strategy was designed to characterize the changes in the plankton community and water quality over three distinct phases, namely the festival period (January) and its pre- (December) and post- (February) events. Surface water samples were collected for the estimation of different environmental variables and for phytoplankton and microzooplankton biodiversity measurement. The preservation and identification of both biotic and abiotic parameters were carried out by standard chemical and biological methods. The intensive human activities led to sharp ecological changes, reflected in a poor water quality index (WQI) due to high turbidity (14.02±2.34 NTU) coupled with low chlorophyll a (1.02±0.21 mg m-3) and dissolved oxygen (3.94±1.1 mg l-1) compared to the pre- and post-festival periods. A sharp reduction in the abundance (4140 to 2997 cells l-1) and diversity (H′=2.72 to 1.33) of phytoplankton and of microzooplankton tintinnids (450 to 328 ind l-1; H′=4.31 to 2.21) was very pronounced. Small-sized tintinnids (average lorica length=29.4 µm; average LOD=10.5 µm), composed of Tintinnopsis minuta, T. lobiancoi, T. nucula and T. gracilis, were predominant and reached some of their greatest abundances during the festival period. Results of ANOVA revealed significant variation across the festival periods for phytoplankton (F=1.77; p=0.006) and tintinnid abundance (F=2.41; p=0.022). RELATE analyses revealed a significant correlation between the variations of the planktonic communities and the environmental data (R=0.107; p=0.005). Three distinct groups were delineated from principal component analysis, in which a set of hydrological parameters acted as the causative factor(s) maintaining the diversity and distribution of the planktonic organisms. The pronounced adverse impact of anthropogenic stresses on the plankton community could lead to environmental deterioration, disrupting the productivity of benthic and pelagic ecosystems as well as fishery potential, which is directly related to livelihood services. The festival can be considered a source of multiple drivers of change, relevant to beach erosion, shoreline changes, pollution from discarded plastic and electronic wastes and destruction of natural habitats, resulting in loss of biodiversity. In addition, deterioration in water quality was also evident from the immersion of idols, causing detrimental effects on aquatic biota. The authors strongly recommend adopting integrated scientific and administrative strategies for the resilience, sustainability and conservation of this megadelta. Keywords: Gangasagar festival, phytoplankton, Sundarban megadelta, tintinnid
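A brief sketch of two of the calculations used above, the Shannon diversity index H′ and a one-way ANOVA across the pre-festival, festival and post-festival phases, using SciPy; the counts and abundances are illustrative placeholders, not the survey data.

```python
import numpy as np
from scipy import stats

def shannon_h(counts):
    """Shannon diversity index H' from raw taxon counts (natural log)."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical taxon counts for one sample (placeholders).
print(f"H' = {shannon_h([120, 80, 40, 30, 10]):.2f}")

# One-way ANOVA on phytoplankton abundance (cells per litre) across the
# pre-festival, festival and post-festival phases; values are illustrative.
pre = [4100, 4220, 3980, 4150]
festival = [3010, 2950, 3060, 2970]
post = [3900, 4050, 3870, 3990]
f_val, p_val = stats.f_oneway(pre, festival, post)
print(f"F = {f_val:.2f}, p = {p_val:.4f}")
```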
Procedia PDF Downloads 234
81 The Implantable MEMS Blood Pressure Sensor Model With Wireless Powering And Data Transmission
Authors: Vitaliy Petrov, Natalia Shusharina, Vitaliy Kasymov, Maksim Patrushev, Evgeny Bogdanov
Abstract:
The leading causes of death worldwide are ischemic heart disease and other cardiovascular illnesses, and a common symptom is high blood pressure. Long-term blood pressure monitoring is very important for prophylaxis, correct diagnosis and timely therapy. Non-invasive methods, which are based on Korotkoff sounds, cannot be applied frequently or over long periods. Implantable devices can combine long-term monitoring with high measurement accuracy. The main purpose of this work is to create a real-time monitoring system to decrease the death rate from cardiovascular diseases. Implantable electronic devices have begun to play an important role in medicine. An implantable device usually consists of a transmitter, a power supply (wireless or based on a specially made battery) and a measurement circuit. Common problems in making implantable devices are the short lifetime of the battery, large size and biocompatibility. In this work, blood pressure measurement is the focus because it is one of the main symptoms of cardiovascular disease. Our device consists of three parts: the implantable pressure sensor, an external transmitter and an automated workstation in a hospital. The implantable pressure sensor can be based on piezoresistive or capacitive technologies, and both have advantages and limitations. The developed circuit is based on a small capacitive sensor fabricated with microelectromechanical systems (MEMS) technology. The capacitive sensor can provide high sensitivity, low power consumption and minimal hysteresis compared to a piezoresistive sensor. For this device, an oscillator-based circuit was selected, in which the frequency depends on the capacitance of the sensor; hence, the pressure can be calculated from the capacitance. The external device (transmitter) is used for wireless charging and signal transmission. Some implantable devices for such applications are passive: the external device sends a radio-wave signal to the internal LC circuit of the implant, receives the reflected signal, and from the change in frequency it is possible to calculate the change of capacitance and then the blood pressure. However, this method has disadvantages, such as dependence on patient position and static operation. The developed implantable device does not have these disadvantages and sends blood pressure data to the external part in real time. The external device continuously sends information about blood pressure to a hospital cloud service for analysis by a physician. The doctor's automated workstation at the hospital also acts as a dashboard, which displays the current medical data of patients requiring attention and stores them in the cloud service. Critical heart conditions usually occur a few hours before a heart attack, and the device is able to send an alarm signal to the hospital for early action by the medical service. The system was tested with wireless charging and data transmission. These results can be used for the ASIC design of the MEMS pressure sensor. Keywords: MEMS sensor, RF power, wireless data, oscillator-based circuit
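To make the oscillator-based read-out concrete, the sketch below assumes an LC tank whose frequency is set by the sensing capacitance, plus a linear capacitance-pressure calibration; the inductance, baseline capacitance and sensitivity are invented values for illustration, not the device's parameters.

```python
import math

# Assumed LC-tank read-out: the sensing capacitance sets the oscillation
# frequency via f = 1 / (2*pi*sqrt(L*C)). The calibration constants below are
# invented for illustration and are not the device's parameters.
L_HENRY = 10e-6            # assumed tank inductance, 10 µH
C0_FARAD = 20e-12          # assumed capacitance at zero gauge pressure, 20 pF
SENS_F_PER_MMHG = 5e-15    # assumed sensitivity, 5 fF per mmHg

def capacitance_from_frequency(f_hz: float) -> float:
    return 1.0 / (L_HENRY * (2.0 * math.pi * f_hz) ** 2)

def pressure_from_capacitance(c_farad: float) -> float:
    return (c_farad - C0_FARAD) / SENS_F_PER_MMHG   # mmHg

f_measured = 11.1e6        # example transmitted frequency, Hz
c = capacitance_from_frequency(f_measured)
print(f"C = {c * 1e12:.2f} pF -> P = {pressure_from_capacitance(c):.0f} mmHg")
```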
Procedia PDF Downloads 589
80 A Systemic Review and Comparison of Non-Isolated Bi-Directional Converters
Authors: Rahil Bahrami, Kaveh Ashenayi
Abstract:
This paper presents a systematic classification and comparative analysis of non-isolated bi-directional DC-DC converters. The increasing demand for efficient energy conversion in diverse applications has spurred the development of various converter topologies. In this study, we categorize bi-directional converters into three distinct classes: inverting, non-inverting, and interleaved. Each category is characterized by its unique operational characteristics and benefits. Furthermore, a practical comparison is conducted by evaluating simulation results for each bi-directional converter. Bi-directional converters (BDCs) can be classified into isolated and non-isolated topologies. Non-isolated converters share a common ground between input and output, making them suitable for applications with minimal voltage change. They are easy to integrate, lightweight, and cost-effective, but have limitations such as limited voltage gain, switching losses, and no protection against high voltages. Isolated converters use transformers to separate input and output, offering safety benefits, high voltage gain, and noise reduction. They are larger and more costly but are essential for automotive designs where safety is crucial. This paper focuses on non-isolated systems. The paper discusses the classification of non-isolated bi-directional converters based on several criteria. Common factors used for classification include topology, voltage conversion, control strategy, power capacity, voltage range, and application. These factors serve as a foundation for categorizing converters, although the specific scheme might vary depending on contextual, application, or system-specific requirements. The paper presents a three-category classification for non-isolated bi-directional DC-DC converters: inverting, non-inverting, and interleaved. In the inverting category, converters produce an output voltage with reversed polarity compared to the input voltage, achieved through specific circuit configurations and control strategies; this is valuable in applications such as motor control and grid-tied solar systems. The non-inverting category consists of converters maintaining the same voltage polarity, useful in scenarios such as battery equalization. Lastly, the interleaved category employs parallel converter stages to enhance power delivery and reduce current ripple. This classification framework enhances comprehension and analysis of non-isolated bi-directional DC-DC converters. The findings contribute to a deeper understanding of the trade-offs and merits associated with different converter types. As a result, this work aids researchers, practitioners, and engineers in selecting appropriate bi-directional converter solutions for specific energy conversion requirements. The proposed classification framework and experimental assessment collectively enhance the comprehension of non-isolated bi-directional DC-DC converters, fostering advancements in efficient power management and utilization. The simulation process uses PSIM to model and simulate non-isolated bi-directional converters from both the inverting and non-inverting categories. The aim is to conduct a comprehensive comparative analysis of these converters, considering key performance indicators such as rise time, efficiency, ripple factor, and maximum error. This systematic evaluation provides valuable insights into the dynamic response, energy efficiency, output stability, and overall precision of the converters.
The results of this comparison facilitate informed decision-making and potential optimizations, ensuring that the chosen converter configuration aligns effectively with the designated operational criteria and performance goals. Keywords: bi-directional, DC-DC converter, non-isolated, energy conversion
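A small sketch of how the stated performance indicators could be computed from an exported simulation waveform; the waveform, the 10%-90% rise-time convention and the peak-to-peak ripple definition are assumptions for illustration, not the PSIM models or metric definitions used in the paper.

```python
import numpy as np

def rise_time(t, v, frac_lo=0.1, frac_hi=0.9):
    """10%-90% rise time of an output-voltage step response."""
    v_final = v[-1]
    t_lo = t[np.argmax(v >= frac_lo * v_final)]
    t_hi = t[np.argmax(v >= frac_hi * v_final)]
    return t_hi - t_lo

def ripple_factor(v_steady):
    """Peak-to-peak ripple relative to the mean output voltage."""
    return (v_steady.max() - v_steady.min()) / v_steady.mean()

def efficiency(v_out, i_out, v_in, i_in):
    """Average output power divided by average input power."""
    return np.mean(v_out * i_out) / np.mean(v_in * i_in)

# Illustrative waveform standing in for an exported simulation trace.
t = np.linspace(0, 5e-3, 5001)
v = 48 * (1 - np.exp(-t / 4e-4)) + 0.2 * np.sin(2 * np.pi * 100e3 * t)
print(f"rise time = {rise_time(t, v) * 1e6:.0f} us")
print(f"ripple factor = {ripple_factor(v[-500:]):.4f}")
print(f"efficiency = {efficiency(v[-500:], np.full(500, 2.0), np.full(500, 24.0), np.full(500, 4.2)):.1%}")
```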
Procedia PDF Downloads 100
79 The Link Between Success Factors of Online Architectural Education and Students’ Demographics
Authors: Yusuf Berkay Metinal, Gulden Gumusburun Ayalp
Abstract:
Architectural education is characterized by its distinctive amalgamation of studio-based pedagogy and theoretical instruction. It offers students a comprehensive learning experience that blends practical skill development with critical inquiry and conceptual exploration. Design studios are central to this educational paradigm, which serve as dynamic hubs of creativity and innovation, providing students with immersive environments for experimentation and collaborative engagement. The physical presence and interactive dynamics inherent in studio-based learning underscore the indispensability of face-to-face instruction and interpersonal interaction in nurturing the next generation of architects. However, architectural education underwent a seismic transformation in response to the global COVID-19 pandemic, precipitating an abrupt transition from traditional, in-person instruction to online education modalities. While this shift introduced newfound flexibility in terms of temporal and spatial constraints, it also brought many challenges to the fore. Chief among these challenges was maintaining effective communication and fostering meaningful collaboration among students in virtual learning environments. Besides these challenges, lack of peer learning emerged as a vital issue of the educational experience, particularly crucial for novice students navigating the intricacies of architectural practice. Nevertheless, the pivot to online education also laid bare a discernible decline in educational efficacy, prompting inquiries regarding the enduring viability of online education in architectural pedagogy. Moreover, as educational institutions grappled with the exigencies of remote instruction, discernible disparities between different institutional contexts emerged. While state universities often contended with fiscal constraints that shaped their operational capacities, private institutions encountered challenges from a lack of institutional fortification and entrenched educational traditions. Acknowledging the multifaceted nature of these challenges, this study endeavored to undertake a comprehensive inquiry into the dynamics of online education within architectural pedagogy by interrogating variables such as class level and type of university; the research aimed to elucidate demographic critical success factors that underpin the effectiveness of online education initiatives. To this end, a meticulously constructed questionnaire was administered to architecture students from diverse academic institutions across Turkey, informed by an exhaustive review of extant literature and scholarly discourse. The resulting dataset, comprising responses from 232 participants, underwent rigorous statistical analysis, including independent samples t-test and one-way ANOVA, to discern patterns and correlations indicative of overarching trends and salient insights. In sum, the findings of this study serve as a scholarly compass for educators, policymakers, and stakeholders navigating the evolving landscapes of architectural education. By elucidating the intricate interplay of demographical factors that shape the efficacy of online education in architectural pedagogy, this research offers a scholarly foundation upon which to anchor informed decisions and strategic interventions to elevate the educational experience for future cohorts of aspiring architects.Keywords: architectural education, COVID-19, distance education, online education
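As an illustration of the statistical tests named above, the sketch below runs an independent-samples t-test (by university type) and a one-way ANOVA (across class levels) on placeholder Likert-scale responses; the group sizes and scores are invented and do not represent the 232 survey responses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Placeholder 5-point Likert scores for one success factor, split by type of
# university and by class level; these are not the actual survey responses.
state_univ = rng.integers(1, 6, size=120)
private_univ = rng.integers(1, 6, size=112)
t_stat, p_t = stats.ttest_ind(state_univ, private_univ, equal_var=False)
print(f"independent-samples t-test: t = {t_stat:.2f}, p = {p_t:.3f}")

year1, year2, year3, year4 = (rng.integers(1, 6, size=58) for _ in range(4))
f_stat, p_f = stats.f_oneway(year1, year2, year3, year4)
print(f"one-way ANOVA across class levels: F = {f_stat:.2f}, p = {p_f:.3f}")
```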
Procedia PDF Downloads 44